Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices
Main Authors: | Ryumin, Dmitry; Ivanko, Denis; Ryumina, Elena
---|---
Format: | Online Article Text
Language: | English
Published: | MDPI, 2023
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9967234/ https://www.ncbi.nlm.nih.gov/pubmed/36850882 http://dx.doi.org/10.3390/s23042284
_version_ | 1784897213753720832 |
---|---|
author | Ryumin, Dmitry; Ivanko, Denis; Ryumina, Elena
author_facet | Ryumin, Dmitry; Ivanko, Denis; Ryumina, Elena
author_sort | Ryumin, Dmitry |
collection | PubMed |
description | Audio-visual speech recognition (AVSR) is one of the most promising solutions for reliable speech recognition, particularly when audio is corrupted by noise. Additional visual information can be used for both automatic lip-reading and gesture recognition. Hand gestures are a form of non-verbal communication and can be used as a very important part of modern human–computer interaction systems. Currently, audio and video modalities are easily accessible by sensors of mobile devices. However, there is no out-of-the-box solution for automatic audio-visual speech and gesture recognition. This study introduces two deep neural network-based model architectures: one for AVSR and one for gesture recognition. The main novelty regarding audio-visual speech recognition lies in fine-tuning strategies for both visual and acoustic features and in the proposed end-to-end model, which considers three modality fusion approaches: prediction-level, feature-level, and model-level. The main novelty in gesture recognition lies in a unique set of spatio-temporal features, including those that consider lip articulation information. As there are no available datasets for the combined task, we evaluated our methods on two different large-scale corpora—LRW and AUTSL—and outperformed existing methods on both audio-visual speech recognition and gesture recognition tasks. We achieved AVSR accuracy for the LRW dataset equal to 98.76% and gesture recognition rate for the AUTSL dataset equal to 98.56%. The results obtained demonstrate not only the high performance of the proposed methodology, but also the fundamental possibility of recognizing audio-visual speech and gestures by sensors of mobile devices. |
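The description names three modality fusion approaches for the end-to-end AVSR model: prediction-level, feature-level, and model-level. The sketch below is a minimal, hypothetical PyTorch illustration of what each fusion strategy generally means; it is not the authors' architecture, and the encoders, `feat_dim`, and classification heads are assumptions made purely for illustration.

```python
# Hypothetical sketch of the three fusion strategies named in the abstract.
# Assumes audio_encoder and visual_encoder are nn.Modules that each map an
# input to a (batch, feat_dim) embedding; not the authors' implementation.
import torch
import torch.nn as nn


class PredictionLevelFusion(nn.Module):
    """Late fusion: each modality predicts class scores, which are averaged."""

    def __init__(self, audio_encoder, visual_encoder, feat_dim, num_classes):
        super().__init__()
        self.audio_encoder = audio_encoder
        self.visual_encoder = visual_encoder
        self.audio_head = nn.Linear(feat_dim, num_classes)
        self.visual_head = nn.Linear(feat_dim, num_classes)

    def forward(self, audio, video):
        a_logits = self.audio_head(self.audio_encoder(audio))
        v_logits = self.visual_head(self.visual_encoder(video))
        return (a_logits + v_logits) / 2  # combine per-modality predictions


class FeatureLevelFusion(nn.Module):
    """Early fusion: modality features are concatenated before a joint head."""

    def __init__(self, audio_encoder, visual_encoder, feat_dim, num_classes):
        super().__init__()
        self.audio_encoder = audio_encoder
        self.visual_encoder = visual_encoder
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, audio, video):
        fused = torch.cat(
            [self.audio_encoder(audio), self.visual_encoder(video)], dim=-1
        )
        return self.head(fused)


class ModelLevelFusion(nn.Module):
    """Model-level fusion: a joint sub-network learns cross-modal interactions."""

    def __init__(self, audio_encoder, visual_encoder, feat_dim, num_classes):
        super().__init__()
        self.audio_encoder = audio_encoder
        self.visual_encoder = visual_encoder
        self.joint = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, audio, video):
        fused = torch.cat(
            [self.audio_encoder(audio), self.visual_encoder(video)], dim=-1
        )
        return self.joint(fused)
```

With pretrained audio and visual encoders that each emit a (batch, feat_dim) embedding, any of the three modules can be trained end-to-end with a standard cross-entropy loss over the target word classes (e.g., the 500 word classes of LRW).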
format | Online Article Text |
id | pubmed-9967234 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9967234 2023-02-26 Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices Ryumin, Dmitry; Ivanko, Denis; Ryumina, Elena. Sensors (Basel), Article. MDPI 2023-02-17 /pmc/articles/PMC9967234/ /pubmed/36850882 http://dx.doi.org/10.3390/s23042284 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Ryumin, Dmitry Ivanko, Denis Ryumina, Elena Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices |
title | Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices |
title_full | Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices |
title_fullStr | Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices |
title_full_unstemmed | Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices |
title_short | Audio-Visual Speech and Gesture Recognition by Sensors of Mobile Devices |
title_sort | audio-visual speech and gesture recognition by sensors of mobile devices |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9967234/ https://www.ncbi.nlm.nih.gov/pubmed/36850882 http://dx.doi.org/10.3390/s23042284 |
work_keys_str_mv | AT ryumindmitry audiovisualspeechandgesturerecognitionbysensorsofmobiledevices AT ivankodenis audiovisualspeechandgesturerecognitionbysensorsofmobiledevices AT ryuminaelena audiovisualspeechandgesturerecognitionbysensorsofmobiledevices |