Vision Transformer Customized for Environment Detection and Collision Prediction to Assist the Visually Impaired


Bibliographic Details

Main Authors: Bayat, Nasrin; Kim, Jong-Hwan; Choudhury, Renoa; Kadhim, Ibrahim F.; Al-Mashhadani, Zubaidah; Aldritz Dela Virgen, Mark; Latorre, Reuben; De La Paz, Ricardo; Park, Joon-Hyuk
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10455554/
https://www.ncbi.nlm.nih.gov/pubmed/37623693
http://dx.doi.org/10.3390/jimaging9080161
Description

Summary: This paper presents a system that utilizes vision transformers and multimodal feedback modules to facilitate navigation and collision avoidance for the visually impaired. By implementing vision transformers, the system achieves accurate object detection, enabling the real-time identification of objects in front of the user. Semantic segmentation and the algorithms developed in this work provide a means to generate a trajectory vector for every object identified by the vision transformer and to detect objects that are likely to intersect with the user's walking path. Audio and vibrotactile feedback modules are integrated to convey collision warnings through multimodal feedback. The dataset used to create the model was captured in both indoor and outdoor settings under different weather conditions, at different times, and across multiple days, resulting in 27,867 photos covering 24 different classes. Classification results showed good performance (95% accuracy), supporting the efficacy and reliability of the proposed model. The design and control methods of the multimodal feedback modules for collision warning are also presented, while experimental validation of their usability and efficiency is left as future work. The demonstrated performance of the vision transformer and the presented algorithms, in conjunction with the multimodal feedback modules, shows promising prospects for the system's feasibility and applicability in navigation assistance for individuals with vision impairment.
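The abstract describes generating a trajectory vector per detected object and flagging objects likely to cross the user's walking path. The paper's exact algorithm is not given in this record, so the following is only an illustrative sketch under simple assumptions: object motion is estimated from bounding-box centroids across frames, and the walking path is approximated as a central corridor of the camera frame. All function names and parameters here are hypothetical, not taken from the paper.

```python
def trajectory_vector(centroids):
    """Estimate an object's average motion vector (dx, dy in
    pixels per frame) from its centroid positions across frames."""
    (x0, y0), (x1, y1) = centroids[0], centroids[-1]
    n = max(len(centroids) - 1, 1)
    return ((x1 - x0) / n, (y1 - y0) / n)

def will_intersect_path(centroid, vector, frame_width,
                        corridor=0.3, horizon=30):
    """Project the object `horizon` frames ahead and flag it if the
    projected x-position falls inside a central band covering
    `corridor` of the frame width (a stand-in for the walking path)."""
    x, _ = centroid
    dx, _ = vector
    x_future = x + dx * horizon
    lo = frame_width * (0.5 - corridor / 2)
    hi = frame_width * (0.5 + corridor / 2)
    return lo <= x_future <= hi

# Example: an object drifting toward the frame center is flagged,
# one moving away from it is not.
vec = trajectory_vector([(100, 240), (110, 240), (120, 240)])
approaching = will_intersect_path((120, 240), vec, frame_width=640)
```

In a real pipeline, the projection would use depth or ground-plane geometry rather than raw pixel coordinates, and the flag would trigger the audio/vibrotactile warning modules described in the paper.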