
Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance

Introduction: Wearable assistive devices for the visually impaired whose technology is based on video camera devices represent a challenge in rapid evolution, where one of the main problems is to find computer vision algorithms that can be implemented in low-cost embedded devices. Objectives and Met...

Full description

Bibliographic Details
Main authors: Maya-Martínez, Sergio-Uriel, Argüelles-Cruz, Amadeo-José, Guzmán-Zavaleta, Zobeida-Jezabel, Ramírez-Cadena, Miguel-de-Jesús
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10061079/
https://www.ncbi.nlm.nih.gov/pubmed/37008985
http://dx.doi.org/10.3389/frobt.2023.1052509
_version_ 1785017220329373696
author Maya-Martínez, Sergio-Uriel
Argüelles-Cruz, Amadeo-José
Guzmán-Zavaleta, Zobeida-Jezabel
Ramírez-Cadena, Miguel-de-Jesús
author_facet Maya-Martínez, Sergio-Uriel
Argüelles-Cruz, Amadeo-José
Guzmán-Zavaleta, Zobeida-Jezabel
Ramírez-Cadena, Miguel-de-Jesús
author_sort Maya-Martínez, Sergio-Uriel
collection PubMed
description Introduction: Wearable assistive devices for the visually impaired that rely on video cameras are a rapidly evolving field, where one of the main challenges is finding computer vision algorithms that can be implemented on low-cost embedded devices. Objectives and Methods: This work presents a Tiny You Only Look Once (Tiny-YOLOv3) architecture for pedestrian detection that can be deployed on low-cost wearable devices as an alternative for developing assistive technologies for the visually impaired. Results: Compared to the original model, the recall of the proposed refined model improves by 71% when working with four anchor boxes and by 66% with six anchor boxes. The accuracy achieved on the same data set increases by 14% and 25%, respectively, the F1 score by 57% and 55%, and the average accuracy by 87% and 99%. The refined model correctly detected 3098 and 2892 objects with four and six anchor boxes, respectively, outperforming by 77% and 65% the original model, which correctly detected 1743 objects. Discussion: Finally, the model was optimized for the Jetson Nano embedded system, a case study for low-power embedded devices, and for a desktop computer. In both cases, the graphics processing unit (GPU) and central processing unit (CPU) were tested, and a documented comparison with solutions aimed at serving visually impaired people was performed. Conclusion: In the desktop tests with an RTX 2070S graphics card, image processing took about 2.8 ms. The Jetson Nano board could process an image in about 110 ms, offering the opportunity to generate alert notification procedures in support of visually impaired mobility.
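The abstract reports relative improvements in recall, accuracy, and F1 for the refined detector. As a reminder of how these detection metrics relate, the following is a minimal sketch; the false-positive and ground-truth counts below are hypothetical illustrations, not figures from the paper (only the 3098 correct detections appear in the abstract):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from raw detection counts.

    tp: true positives (correctly detected pedestrians)
    fp: false positives (spurious detections)
    fn: false negatives (missed pedestrians)
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical example: 3098 correct detections (from the abstract),
# with assumed values of 500 false positives and 4000 ground-truth objects.
p, r, f1 = precision_recall_f1(tp=3098, fp=500, fn=4000 - 3098)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```

The F1 score is the harmonic mean of precision and recall, which is why the abstract reports it alongside the other two metrics: it penalizes a model that trades one for the other.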
format Online
Article
Text
id pubmed-10061079
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-100610792023-03-31 Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance Maya-Martínez, Sergio-Uriel Argüelles-Cruz, Amadeo-José Guzmán-Zavaleta, Zobeida-Jezabel Ramírez-Cadena, Miguel-de-Jesús Front Robot AI Robotics and AI Frontiers Media S.A. 2023-03-16 /pmc/articles/PMC10061079/ /pubmed/37008985 http://dx.doi.org/10.3389/frobt.2023.1052509 Text en Copyright © 2023 Maya-Martínez, Argüelles-Cruz, Guzmán-Zavaleta and Ramírez-Cadena. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Maya-Martínez, Sergio-Uriel
Argüelles-Cruz, Amadeo-José
Guzmán-Zavaleta, Zobeida-Jezabel
Ramírez-Cadena, Miguel-de-Jesús
Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
title Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
title_full Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
title_fullStr Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
title_full_unstemmed Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
title_short Pedestrian detection model based on Tiny-Yolov3 architecture for wearable devices to visually impaired assistance
title_sort pedestrian detection model based on tiny-yolov3 architecture for wearable devices to visually impaired assistance
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10061079/
https://www.ncbi.nlm.nih.gov/pubmed/37008985
http://dx.doi.org/10.3389/frobt.2023.1052509
work_keys_str_mv AT mayamartinezsergiouriel pedestriandetectionmodelbasedontinyyolov3architectureforwearabledevicestovisuallyimpairedassistance
AT arguellescruzamadeojose pedestriandetectionmodelbasedontinyyolov3architectureforwearabledevicestovisuallyimpairedassistance
AT guzmanzavaletazobeidajezabel pedestriandetectionmodelbasedontinyyolov3architectureforwearabledevicestovisuallyimpairedassistance
AT ramirezcadenamigueldejesus pedestriandetectionmodelbasedontinyyolov3architectureforwearabledevicestovisuallyimpairedassistance