Optimized deep learning vision system for human action recognition from drone images

Bibliographic Details
Main Authors: Samma, Hussein; Sama, Ali Salem Bin
Format: Online Article Text
Language: English
Published: Springer US 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10234799/
https://www.ncbi.nlm.nih.gov/pubmed/37362716
http://dx.doi.org/10.1007/s11042-023-15930-9
author Samma, Hussein
Sama, Ali Salem Bin
collection PubMed
description There are several benefits to constructing a lightweight vision system that runs directly on resource-limited hardware devices. Most deep learning-based computer vision systems, such as YOLO (You Only Look Once), use computationally expensive backbone feature extractor networks such as ResNet and Inception. To address the issue of network complexity, researchers created SqueezeNet, a compact, compressed alternative network. However, SqueezeNet was trained as a broad classification system to recognize 1000 distinct object classes. This work integrates a two-layer particle swarm optimizer (TLPSO) into YOLO to suppress the SqueezeNet convolutional filters that contribute little to human action recognition. In short, this work introduces a lightweight vision system with an optimized SqueezeNet backbone feature extraction network, and it does so without sacrificing accuracy, because the high-dimensional SqueezeNet convolutional filter selection is driven by the efficient TLPSO algorithm. The proposed vision system has been applied to the recognition of human actions in drone-mounted camera images. This study focused on two actions, namely walking and running. A total of 300 images were captured at various locations, angles, and weather conditions: 100 of running and 200 of walking. The TLPSO technique reduced SqueezeNet’s convolutional filters by 52%, yielding a sevenfold increase in detection speed. With an F1 score of 94.65% and an inference time of 0.061 milliseconds, the proposed system outperformed earlier vision systems at recognizing humans in drone-based images. In addition, a performance assessment against other related optimizers showed that TLPSO had a better convergence curve and achieved a higher fitness value. In statistical comparisons, TLPSO surpassed PSO and RLMPSO by a wide margin.
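
The record gives only this high-level description of the method, so the following is a minimal, illustrative sketch (not the authors' TLPSO) of the general idea: a plain binary particle swarm optimizer searching for a mask over SqueezeNet convolutional filters, with a fitness that rewards detection F1 and penalizes the number of filters kept. The filter count, the single-swarm update rule, and the evaluate_f1 stub are assumptions made purely for illustration.

# Illustrative binary PSO for convolutional filter selection (assumed setup,
# not the paper's TLPSO). Each particle is a 0/1 mask over candidate filters.
import numpy as np

N_FILTERS = 512            # assumed number of candidate SqueezeNet filters
N_PARTICLES = 20
N_ITERS = 50
W, C1, C2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

def evaluate_f1(mask):
    # Hypothetical stand-in for running the pruned detector on a validation
    # set and measuring F1; faked here so the sketch runs on its own.
    rng = np.random.default_rng(int(mask.sum()))
    return 0.9 - 0.0002 * abs(mask.sum() - 0.5 * N_FILTERS) + 0.01 * rng.random()

def fitness(mask):
    # Reward accuracy, lightly penalize the fraction of filters kept.
    return evaluate_f1(mask) - 0.05 * mask.mean()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
vel = rng.normal(0.0, 1.0, (N_PARTICLES, N_FILTERS))
pos = (rng.random((N_PARTICLES, N_FILTERS)) < 0.5).astype(float)

pbest = pos.copy()
pbest_fit = np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()
gbest_fit = pbest_fit.max()

for _ in range(N_ITERS):
    r1, r2 = rng.random((2, N_PARTICLES, N_FILTERS))
    # Standard velocity update toward personal and global bests.
    vel = W * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    # Binary PSO: sample each bit with probability sigmoid(velocity).
    pos = (rng.random((N_PARTICLES, N_FILTERS)) < sigmoid(vel)).astype(float)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved] = pos[improved]
    pbest_fit[improved] = fit[improved]
    if fit.max() > gbest_fit:
        gbest = pos[fit.argmax()].copy()
        gbest_fit = fit.max()

print(f"kept {int(gbest.sum())} of {N_FILTERS} filters, best fitness = {gbest_fit:.4f}")

In this toy setup the optimizer converges toward a mask that keeps roughly half the filters; in the paper the reported outcome is a 52% reduction in SqueezeNet filters with a sevenfold speedup.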
format Online
Article
Text
id pubmed-10234799
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-10234799 2023-06-06 Optimized deep learning vision system for human action recognition from drone images Samma, Hussein Sama, Ali Salem Bin Multimed Tools Appl Article There are several benefits to constructing a lightweight vision system that runs directly on resource-limited hardware devices. Most deep learning-based computer vision systems, such as YOLO (You Only Look Once), use computationally expensive backbone feature extractor networks such as ResNet and Inception. To address the issue of network complexity, researchers created SqueezeNet, a compact, compressed alternative network. However, SqueezeNet was trained as a broad classification system to recognize 1000 distinct object classes. This work integrates a two-layer particle swarm optimizer (TLPSO) into YOLO to suppress the SqueezeNet convolutional filters that contribute little to human action recognition. In short, this work introduces a lightweight vision system with an optimized SqueezeNet backbone feature extraction network, and it does so without sacrificing accuracy, because the high-dimensional SqueezeNet convolutional filter selection is driven by the efficient TLPSO algorithm. The proposed vision system has been applied to the recognition of human actions in drone-mounted camera images. This study focused on two actions, namely walking and running. A total of 300 images were captured at various locations, angles, and weather conditions: 100 of running and 200 of walking. The TLPSO technique reduced SqueezeNet’s convolutional filters by 52%, yielding a sevenfold increase in detection speed. With an F1 score of 94.65% and an inference time of 0.061 milliseconds, the proposed system outperformed earlier vision systems at recognizing humans in drone-based images. In addition, a performance assessment against other related optimizers showed that TLPSO had a better convergence curve and achieved a higher fitness value. In statistical comparisons, TLPSO surpassed PSO and RLMPSO by a wide margin. Springer US 2023-06-02 /pmc/articles/PMC10234799/ /pubmed/37362716 http://dx.doi.org/10.1007/s11042-023-15930-9 Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
Optimized deep learning vision system for human action recognition from drone images
title Optimized deep learning vision system for human action recognition from drone images
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10234799/
https://www.ncbi.nlm.nih.gov/pubmed/37362716
http://dx.doi.org/10.1007/s11042-023-15930-9