
A Comparative Study in Real-Time Scene Sonification for Visually Impaired People


Bibliographic Details
Main Authors: Hu, Weijian, Wang, Kaiwei, Yang, Kailun, Cheng, Ruiqi, Ye, Yaozu, Sun, Lei, Xu, Zhijie
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7309097/
https://www.ncbi.nlm.nih.gov/pubmed/32517134
http://dx.doi.org/10.3390/s20113222
_version_ 1783549145629327360
author Hu, Weijian
Wang, Kaiwei
Yang, Kailun
Cheng, Ruiqi
Ye, Yaozu
Sun, Lei
Xu, Zhijie
author_facet Hu, Weijian
Wang, Kaiwei
Yang, Kailun
Cheng, Ruiqi
Ye, Yaozu
Sun, Lei
Xu, Zhijie
author_sort Hu, Weijian
collection PubMed
description In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three auditory-based interaction methods, i.e., depth image sonification, obstacle sonification and path sonification, which convey raw depth images, obstacle information and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment with twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of the pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, the sonification of low-level scene information, such as raw depth images, allows visually impaired people to understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants in the experiment, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and differences of the three scene detection algorithms and their corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as sound feedback in virtual reality applications.
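
To make the depth-image sonification idea concrete, below is a minimal, purely illustrative Python sketch of one common column-sweep mapping: the nearest depth in each image column sets the pitch (closer obstacles sound higher) and the column's horizontal position sets the stereo pan. The function name, frequency range, and 5 m depth cap are assumptions made for this example; it is not the authors' implementation, which is described in the article itself.

import numpy as np

def sonify_depth_column_sweep(depth, sample_rate=44100, sweep_time=1.0,
                              f_min=200.0, f_max=2000.0, max_depth=5.0):
    # Illustrative sketch, not the method from the paper.
    # Render a depth image (H x W array of distances in metres) as a stereo
    # left-to-right sweep: each column becomes a short tone whose pitch rises
    # as the nearest obstacle in that column gets closer, panned to match the
    # column's horizontal position. Returns a (n_samples, 2) float32 array.
    h, w = depth.shape
    samples_per_col = max(int(sample_rate * sweep_time / w), 1)
    out = np.zeros((samples_per_col * w, 2), dtype=np.float32)
    t = np.arange(samples_per_col) / sample_rate

    for col in range(w):
        nearest = np.clip(np.min(depth[:, col]), 0.0, max_depth)
        closeness = 1.0 - nearest / max_depth        # 1 = very close, 0 = far
        freq = f_min + closeness * (f_max - f_min)   # closer -> higher pitch
        tone = 0.3 * np.sin(2.0 * np.pi * freq * t)
        pan = col / max(w - 1, 1)                    # 0 = left edge, 1 = right edge
        seg = slice(col * samples_per_col, (col + 1) * samples_per_col)
        out[seg, 0] = tone * (1.0 - pan)             # left channel
        out[seg, 1] = tone * pan                     # right channel
    return out

# Example: a synthetic 240 x 320 depth frame with a near obstacle on the right.
frame = np.full((240, 320), 4.0, dtype=np.float32)
frame[:, 220:] = 0.8
stereo = sonify_depth_column_sweep(frame)  # write out with e.g. scipy.io.wavfile

Obstacle and path sonification would instead feed higher-level detector outputs (obstacle positions, walkable-path direction) into a similar audio mapping, which is the trade-off between comprehensiveness and learnability that the study examines.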
format Online
Article
Text
id pubmed-7309097
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-73090972020-06-25 A Comparative Study in Real-Time Scene Sonification for Visually Impaired People Hu, Weijian Wang, Kaiwei Yang, Kailun Cheng, Ruiqi Ye, Yaozu Sun, Lei Xu, Zhijie Sensors (Basel) Article In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification as well as path sonification, which convey raw depth images, obstacle information and path information respectively to visually impaired people. Three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of pathway, is easier to learn and adapt, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, there is no interaction method that is best suited for all participants in the experiment, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and the differences of three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as the sound feedback of virtual reality applications. MDPI 2020-06-05 /pmc/articles/PMC7309097/ /pubmed/32517134 http://dx.doi.org/10.3390/s20113222 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Hu, Weijian
Wang, Kaiwei
Yang, Kailun
Cheng, Ruiqi
Ye, Yaozu
Sun, Lei
Xu, Zhijie
A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
title A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
title_full A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
title_fullStr A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
title_full_unstemmed A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
title_short A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
title_sort comparative study in real-time scene sonification for visually impaired people
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7309097/
https://www.ncbi.nlm.nih.gov/pubmed/32517134
http://dx.doi.org/10.3390/s20113222
work_keys_str_mv AT huweijian acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT wangkaiwei acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT yangkailun acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT chengruiqi acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT yeyaozu acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT sunlei acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT xuzhijie acomparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT huweijian comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT wangkaiwei comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT yangkailun comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT chengruiqi comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT yeyaozu comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT sunlei comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople
AT xuzhijie comparativestudyinrealtimescenesonificationforvisuallyimpairedpeople