An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning
When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems cannot know the fault self-diagnosis information and the instrument cluster information that indicates the current state of the vehicle when driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle’s display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than the time taken in a cloud server. In addition, the overall computational time of the AVS system was approximately 2 ms faster than the existing instrument cluster. Therefore, because the AVS proposed in this paper can enable blind and deaf people to select only what they want to hear and see, it reduces the overload of transmission and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance.
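The abstract names three cooperating modules (DCMM, ACM with its STS/TWS submodules, and DVM). As a reading aid, the following is a minimal structural sketch of how such a pipeline might be wired together; only the module acronyms come from the abstract, while every class layout, method name, and data shape below is a hypothetical illustration, not the authors' implementation.

```python
# Minimal structural sketch of the AVS pipeline described in the abstract.
# The module acronyms (DCMM, ACM, STS, TWS, DVM) come from the paper;
# everything else (class layout, method names, data shapes) is assumed.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class VehicleRecord:
    """One snapshot of sensor and fault self-diagnosis data."""
    sensors: Dict[str, float]              # e.g. {"speed_kmh": 62.0, "rpm": 2100.0}
    faults: List[str] = field(default_factory=list)


class DCMM:
    """Data collection and management module: stores data from the vehicle."""
    def __init__(self) -> None:
        self._records: List[VehicleRecord] = []

    def collect(self, record: VehicleRecord) -> None:
        self._records.append(record)

    def latest(self) -> VehicleRecord:
        return self._records[-1]


class ACM:
    """Audification conversion module: STS (speech -> text), TWS (text -> voice)."""
    def speech_to_text(self, audio: bytes) -> str:   # STS submodule (stubbed)
        raise NotImplementedError("plug an STT engine in here")

    def text_to_wave(self, text: str) -> bytes:      # TWS submodule (stubbed)
        raise NotImplementedError("plug a TTS engine in here")


class DVM:
    """Data visualization module: lays out sensor/fault data for the display."""
    def render(self, record: VehicleRecord, width_px: int, height_px: int) -> str:
        lines = [f"{name}: {value}" for name, value in record.sensors.items()]
        lines += [f"FAULT: {code}" for code in record.faults]
        return "\n".join(lines)  # placeholder for real graphic components


if __name__ == "__main__":
    dcmm, dvm = DCMM(), DVM()
    dcmm.collect(VehicleRecord({"speed_kmh": 62.0, "rpm": 2100.0}, ["P0301"]))
    print(dvm.render(dcmm.latest(), width_px=800, height_px=480))
```

The STS and TWS bodies are deliberately left as stubs: the paper describes them as deep-learning speech-to-text and text-to-wave components, and any concrete engine would plug in at those two methods.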
| Main Authors: | Son, Surak; Jeong, YiNa; Lee, Byungkwan |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2019 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6891558/ https://www.ncbi.nlm.nih.gov/pubmed/31752247 http://dx.doi.org/10.3390/s19225035 |
| _version_ | 1783475842480865280 |
|---|---|
| author | Son, Surak; Jeong, YiNa; Lee, Byungkwan |
| author_facet | Son, Surak; Jeong, YiNa; Lee, Byungkwan |
| author_sort | Son, Surak |
| collection | PubMed |
| description | When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems cannot know the fault self-diagnosis information and the instrument cluster information that indicates the current state of the vehicle when driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle’s display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than the time taken in a cloud server. In addition, the overall computational time of the AVS system was approximately 2 ms faster than the existing instrument cluster. Therefore, because the AVS proposed in this paper can enable blind and deaf people to select only what they want to hear and see, it reduces the overload of transmission and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance. |
| format | Online Article Text |
| id | pubmed-6891558 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2019 |
| publisher | MDPI |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-68915582019-12-18 An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning Son, Surak Jeong, YiNa Lee, Byungkwan Sensors (Basel) Article When blind and deaf people are passengers in fully autonomous vehicles, an intuitive and accurate visualization screen should be provided for the deaf, and an audification system with speech-to-text (STT) and text-to-speech (TTS) functions should be provided for the blind. However, these systems cannot know the fault self-diagnosis information and the instrument cluster information that indicates the current state of the vehicle when driving. This paper proposes an audification and visualization system (AVS) of an autonomous vehicle for blind and deaf people based on deep learning to solve this problem. The AVS consists of three modules. The data collection and management module (DCMM) stores and manages the data collected from the vehicle. The audification conversion module (ACM) has a speech-to-text submodule (STS) that recognizes a user’s speech and converts it to text data, and a text-to-wave submodule (TWS) that converts text data to voice. The data visualization module (DVM) visualizes the collected sensor data, fault self-diagnosis data, etc., and places the visualized data according to the size of the vehicle’s display. The experiment shows that the time taken to adjust visualization graphic components in on-board diagnostics (OBD) was approximately 2.5 times faster than the time taken in a cloud server. In addition, the overall computational time of the AVS system was approximately 2 ms faster than the existing instrument cluster. Therefore, because the AVS proposed in this paper can enable blind and deaf people to select only what they want to hear and see, it reduces the overload of transmission and greatly increases the safety of the vehicle. If the AVS is introduced in a real vehicle, it can prevent accidents for disabled and other passengers in advance. MDPI 2019-11-18 /pmc/articles/PMC6891558/ /pubmed/31752247 http://dx.doi.org/10.3390/s19225035 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
| spellingShingle | Article; Son, Surak; Jeong, YiNa; Lee, Byungkwan; An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning |
| title | An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning |
| title_full | An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning |
| title_fullStr | An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning |
| title_full_unstemmed | An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning |
| title_short | An Audification and Visualization System (AVS) of an Autonomous Vehicle for Blind and Deaf People Based on Deep Learning |
| title_sort | audification and visualization system (avs) of an autonomous vehicle for blind and deaf people based on deep learning |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6891558/ https://www.ncbi.nlm.nih.gov/pubmed/31752247 http://dx.doi.org/10.3390/s19225035 |
| work_keys_str_mv | AT sonsurak anaudificationandvisualizationsystemavsofanautonomousvehicleforblindanddeafpeoplebasedondeeplearning AT jeongyina anaudificationandvisualizationsystemavsofanautonomousvehicleforblindanddeafpeoplebasedondeeplearning AT leebyungkwan anaudificationandvisualizationsystemavsofanautonomousvehicleforblindanddeafpeoplebasedondeeplearning AT sonsurak audificationandvisualizationsystemavsofanautonomousvehicleforblindanddeafpeoplebasedondeeplearning AT jeongyina audificationandvisualizationsystemavsofanautonomousvehicleforblindanddeafpeoplebasedondeeplearning AT leebyungkwan audificationandvisualizationsystemavsofanautonomousvehicleforblindanddeafpeoplebasedondeeplearning |