Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild
This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition, focusing on in-the-wild scenarios. The term “in the wild” is used to describe AVSR for unconstrained natural-language audio streams and video-stream modalities. Audio-visual speech recognition (AVSR) is a speech-recognition task that leverages both an audio input of a human voice and an aligned visual input of lip motions.
Main Authors: | He, Yibo; Seng, Kah Phooi; Ang, Li Minn |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9959127/ https://www.ncbi.nlm.nih.gov/pubmed/36850432 http://dx.doi.org/10.3390/s23041834 |
_version_ | 1784895196864970752 |
---|---|
author | He, Yibo Seng, Kah Phooi Ang, Li Minn |
author_facet | He, Yibo Seng, Kah Phooi Ang, Li Minn |
author_sort | He, Yibo |
collection | PubMed |
description | This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition, focusing on in-the-wild scenarios. The term “in the wild” is used to describe AVSR for unconstrained natural-language audio streams and video-stream modalities. Audio-visual speech recognition (AVSR) is a speech-recognition task that leverages both an audio input of a human voice and an aligned visual input of lip motions. However, since in-the-wild scenarios can include more noise, AVSR’s performance is affected. Here, we propose new improvements for AVSR models by incorporating data-augmentation techniques to generate more data samples for building the classification models. For the data-augmentation techniques, we utilized a combination of conventional approaches (e.g., flips and rotations), as well as newer approaches, such as generative adversarial networks (GANs). To validate the approaches, we used augmented data from well-known datasets (LRS2—Lip Reading Sentences 2 and LRS3) in the training process and testing was performed using the original data. The study and experimental results indicated that the proposed AVSR model and framework, combined with the augmentation approach, enhanced the performance of the AVSR framework in the wild for noisy datasets. Furthermore, in this study, we discuss the domains of automatic speech recognition (ASR) architectures and audio-visual speech recognition (AVSR) architectures and give a concise summary of the AVSR models that have been proposed. |
format | Online Article Text |
id | pubmed-9959127 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9959127 2023-02-26 Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild He, Yibo Seng, Kah Phooi Ang, Li Minn Sensors (Basel) Article This paper investigates multimodal sensor architectures with deep learning for audio-visual speech recognition, focusing on in-the-wild scenarios. The term “in the wild” is used to describe AVSR for unconstrained natural-language audio streams and video-stream modalities. Audio-visual speech recognition (AVSR) is a speech-recognition task that leverages both an audio input of a human voice and an aligned visual input of lip motions. However, since in-the-wild scenarios can include more noise, AVSR’s performance is affected. Here, we propose new improvements for AVSR models by incorporating data-augmentation techniques to generate more data samples for building the classification models. For the data-augmentation techniques, we utilized a combination of conventional approaches (e.g., flips and rotations), as well as newer approaches, such as generative adversarial networks (GANs). To validate the approaches, we used augmented data from well-known datasets (LRS2—Lip Reading Sentences 2 and LRS3) in the training process and testing was performed using the original data. The study and experimental results indicated that the proposed AVSR model and framework, combined with the augmentation approach, enhanced the performance of the AVSR framework in the wild for noisy datasets. Furthermore, in this study, we discuss the domains of automatic speech recognition (ASR) architectures and audio-visual speech recognition (AVSR) architectures and give a concise summary of the AVSR models that have been proposed. MDPI 2023-02-07 /pmc/articles/PMC9959127/ /pubmed/36850432 http://dx.doi.org/10.3390/s23041834 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article He, Yibo Seng, Kah Phooi Ang, Li Minn Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild |
title | Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild |
title_full | Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild |
title_fullStr | Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild |
title_full_unstemmed | Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild |
title_short | Multimodal Sensor-Input Architecture with Deep Learning for Audio-Visual Speech Recognition in Wild |
title_sort | multimodal sensor-input architecture with deep learning for audio-visual speech recognition in wild |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9959127/ https://www.ncbi.nlm.nih.gov/pubmed/36850432 http://dx.doi.org/10.3390/s23041834 |
work_keys_str_mv | AT heyibo multimodalsensorinputarchitecturewithdeeplearningforaudiovisualspeechrecognitioninwild AT sengkahphooi multimodalsensorinputarchitecturewithdeeplearningforaudiovisualspeechrecognitioninwild AT angliminn multimodalsensorinputarchitecturewithdeeplearningforaudiovisualspeechrecognitioninwild |
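The abstract above describes augmenting the LRS2/LRS3 training clips with conventional transformations (flips and rotations) before building the AVSR models. Below is a minimal sketch of what such frame-level augmentation could look like; it is not the authors' code, and the function name `augment_clip`, the clip shape, and the parameter values are illustrative assumptions.

```python
# Illustrative sketch (assumption, not the paper's implementation) of the
# "conventional" augmentations mentioned in the abstract: a random horizontal
# flip and a small random rotation applied to a lip-region video clip.
import numpy as np
from scipy.ndimage import rotate


def augment_clip(frames: np.ndarray, rng: np.random.Generator,
                 max_angle: float = 10.0, flip_prob: float = 0.5) -> np.ndarray:
    """Augment a video clip of shape (T, H, W, C) with a flip and a rotation."""
    out = frames
    if rng.random() < flip_prob:
        # Mirror every frame left-right (axis 2 is the width dimension).
        out = np.flip(out, axis=2)
    # Rotate all frames by the same small angle; reshape=False keeps H and W.
    angle = rng.uniform(-max_angle, max_angle)
    out = rotate(out, angle, axes=(1, 2), reshape=False, mode='nearest')
    return out


# Example: derive one extra training sample from a dummy 16-frame clip.
rng = np.random.default_rng(0)
clip = rng.random((16, 96, 96, 3)).astype(np.float32)
augmented = augment_clip(clip, rng)
print(augmented.shape)  # (16, 96, 96, 3)
```

The GAN-based augmentation mentioned in the abstract as the newer approach would supplement these geometric transforms with synthetically generated samples rather than transformed copies of existing clips.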