
GL-YOLO-Lite: A Novel Lightweight Fallen Person Detection Model

Fallen person detection (FPD) is a crucial task in guaranteeing individual safety. Although deep-learning models have shown potential in addressing this challenge, they face several obstacles, such as the inadequate utilization of global contextual information, poor feature extraction, and substantial computational requirements. These limitations have led to low detection accuracy, poor generalization, and slow inference speeds. To overcome these challenges, the present study proposed a new lightweight detection model named Global and Local You-Only-Look-Once Lite (GL-YOLO-Lite), which integrates both global and local contextual information by incorporating transformer and attention modules into the popular object-detection framework YOLOv5. Specifically, a stem module replaced the original inefficient focus module, and rep modules with re-parameterization technology were introduced. Furthermore, a lightweight detection head was developed to reduce the number of redundant channels in the model. Finally, we constructed a large-scale, well-formatted FPD dataset (FPDD). The proposed model employed a binary cross-entropy (BCE) function to calculate the classification and confidence losses. An experimental evaluation on the FPDD and the Pascal VOC dataset demonstrated that GL-YOLO-Lite outperformed other state-of-the-art models by significant margins of 2.4–18.9 mean average precision (mAP) points on the FPDD and 1.8–23.3 mAP points on Pascal VOC. Moreover, GL-YOLO-Lite maintained a real-time processing speed of 56.82 frames per second (FPS) on a Titan Xp GPU and 16.45 FPS on a HiSilicon Kirin 980, demonstrating its effectiveness in real-world scenarios.
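The abstract states that the model uses a binary cross-entropy (BCE) function for its classification and confidence losses. The sketch below, in PyTorch, illustrates how such a loss is typically computed in a YOLOv5-style detector; it is not the authors' code, and the function name, tensor shapes, and target construction (IoU-valued confidence targets) are assumptions made for illustration only.

```python
# Minimal sketch, not the authors' implementation: how a YOLOv5-style detector
# such as GL-YOLO-Lite can compute classification and confidence (objectness)
# losses with binary cross-entropy (BCE), as stated in the abstract.
# Shapes, names, and target construction are illustrative assumptions.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()  # numerically stable BCE applied to raw logits


def classification_and_confidence_loss(pred_cls, target_cls, pred_obj, target_obj):
    """pred_cls:   (N, num_classes) class logits for matched predictions
    target_cls:    (N, num_classes) one-hot (or label-smoothed) class targets
    pred_obj:      (M,) objectness logits for every prediction location
    target_obj:    (M,) confidence targets, e.g. IoU of the assigned box, else 0
    """
    loss_cls = bce(pred_cls, target_cls)  # classification loss
    loss_obj = bce(pred_obj, target_obj)  # confidence (objectness) loss
    return loss_cls, loss_obj


# Toy usage with random tensors standing in for network outputs and targets.
if __name__ == "__main__":
    torch.manual_seed(0)
    pred_cls = torch.randn(8, 1)       # one class: "fallen person"
    target_cls = torch.ones(8, 1)      # all eight matched predictions are positives
    pred_obj = torch.randn(100)        # objectness for 100 prediction locations
    target_obj = torch.zeros(100)
    target_obj[:8] = torch.rand(8)     # matched locations receive IoU-like targets
    loss_cls, loss_obj = classification_and_confidence_loss(
        pred_cls, target_cls, pred_obj, target_obj)
    print(float(loss_cls), float(loss_obj))
```

In practice the two terms would be weighted and combined with a box-regression loss; the record here only specifies that BCE is used for the classification and confidence components.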

Bibliographic Details
Main Authors: Dai, Yuan; Liu, Weiming
Format: Online Article (Text)
Language: English
Published: Entropy (Basel), MDPI, 29 March 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10137530/
https://www.ncbi.nlm.nih.gov/pubmed/37190375
http://dx.doi.org/10.3390/e25040587
License: © 2023 by the authors. Open access under the terms of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).