Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech
Endangered languages, as immaterial cultural resources that cannot be renewed, generally have low-resource characteristics. Automatic speech recognition (ASR) is an effective means of protecting such languages. However, for a low-resource language, native speakers are few and labeled corpora are insufficient. ASR therefore suffers from deficiencies including high speaker dependence and overfitting, which greatly harm recognition accuracy. To tackle these deficiencies, this paper puts forward an audiovisual speech recognition (AVSR) approach based on an LSTM-Transformer. The approach introduces visual modality information, including lip movements, to reduce the dependence of acoustic models on speakers and on the quantity of data. Specifically, through the fusion of audio and visual information, the new approach enhances the expression of the speakers' feature space, thus achieving the speaker adaptation that is difficult with a single modality. The approach also includes experiments on speaker dependence and evaluates to what extent audiovisual fusion depends on speakers. Experimental results show that the CER of AVSR is 16.9% lower than that of traditional models (in the optimal performance scenario) and 11.8% lower than that of lip reading. The accuracy of recognizing phonemes, especially finals, improves substantially. For recognizing initials, accuracy improves for affricates and fricatives, where lip movements are obvious, and deteriorates for stops, where lip movements are not obvious. In AVSR, generalization to different speakers is also better than with a single modality, and the CER can drop by as much as 17.2%. Therefore, AVSR is of great significance in studying the protection and preservation of endangered languages through AI.
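The abstract describes an LSTM-Transformer architecture in which acoustic features are fused with lip-movement (visual) features so that both modalities jointly shape the speaker feature space. The sketch below shows one plausible shape of such a fusion front-end; it is a minimal illustration assuming PyTorch, concatenation-based fusion, and made-up module names, feature dimensions, and a CTC-style output head, not the paper's actual implementation. A worked example of the CER metric quoted above follows the full record at the end of this page.

```python
# Minimal sketch of audiovisual feature fusion with modality-specific LSTMs
# followed by a Transformer encoder, in the spirit of the LSTM-Transformer
# AVSR approach described in the abstract. All names and sizes are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn


class AVFusionEncoder(nn.Module):
    def __init__(self, audio_dim=80, video_dim=512, hidden_dim=256,
                 num_heads=4, num_layers=4, num_tokens=1000):
        super().__init__()
        # Modality-specific LSTMs model the temporal dynamics of each stream.
        self.audio_lstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True)
        self.video_lstm = nn.LSTM(video_dim, hidden_dim, batch_first=True)
        # Concatenated audio+video states are projected, then a Transformer
        # encoder models cross-frame context over the fused representation.
        self.fusion_proj = nn.Linear(2 * hidden_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.classifier = nn.Linear(hidden_dim, num_tokens)  # e.g. CTC targets

    def forward(self, audio_feats, video_feats):
        # audio_feats: (batch, T, audio_dim), e.g. filterbank frames.
        # video_feats: (batch, T, video_dim), e.g. lip-region embeddings,
        # assumed here to be already aligned to the audio frame rate.
        a, _ = self.audio_lstm(audio_feats)
        v, _ = self.video_lstm(video_feats)
        fused = self.fusion_proj(torch.cat([a, v], dim=-1))
        fused = self.encoder(fused)
        return self.classifier(fused)  # per-frame token logits


if __name__ == "__main__":
    model = AVFusionEncoder()
    logits = model(torch.randn(2, 120, 80), torch.randn(2, 120, 512))
    print(logits.shape)  # torch.Size([2, 120, 1000])
```

Concatenating per-frame LSTM states before the Transformer is only one of several possible fusion points; the paper's exact fusion strategy is not reproduced here.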
| Main Authors: | Yu, Chongchong; Yu, Jiaqi; Qian, Zhaopeng; Tan, Yuchen |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9959391/ https://www.ncbi.nlm.nih.gov/pubmed/36850669 http://dx.doi.org/10.3390/s23042071 |
_version_ | 1784895264638631936 |
---|---|
author | Yu, Chongchong Yu, Jiaqi Qian, Zhaopeng Tan, Yuchen |
author_facet | Yu, Chongchong Yu, Jiaqi Qian, Zhaopeng Tan, Yuchen |
author_sort | Yu, Chongchong |
collection | PubMed |
description | Endangered languages, as immaterial cultural resources that cannot be renewed, generally have low-resource characteristics. Automatic speech recognition (ASR) is an effective means of protecting such languages. However, for a low-resource language, native speakers are few and labeled corpora are insufficient. ASR therefore suffers from deficiencies including high speaker dependence and overfitting, which greatly harm recognition accuracy. To tackle these deficiencies, this paper puts forward an audiovisual speech recognition (AVSR) approach based on an LSTM-Transformer. The approach introduces visual modality information, including lip movements, to reduce the dependence of acoustic models on speakers and on the quantity of data. Specifically, through the fusion of audio and visual information, the new approach enhances the expression of the speakers' feature space, thus achieving the speaker adaptation that is difficult with a single modality. The approach also includes experiments on speaker dependence and evaluates to what extent audiovisual fusion depends on speakers. Experimental results show that the CER of AVSR is 16.9% lower than that of traditional models (in the optimal performance scenario) and 11.8% lower than that of lip reading. The accuracy of recognizing phonemes, especially finals, improves substantially. For recognizing initials, accuracy improves for affricates and fricatives, where lip movements are obvious, and deteriorates for stops, where lip movements are not obvious. In AVSR, generalization to different speakers is also better than with a single modality, and the CER can drop by as much as 17.2%. Therefore, AVSR is of great significance in studying the protection and preservation of endangered languages through AI. |
format | Online Article Text |
id | pubmed-9959391 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9959391 2023-02-26 Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech Yu, Chongchong Yu, Jiaqi Qian, Zhaopeng Tan, Yuchen Sensors (Basel) Article Endangered languages, as immaterial cultural resources that cannot be renewed, generally have low-resource characteristics. Automatic speech recognition (ASR) is an effective means of protecting such languages. However, for a low-resource language, native speakers are few and labeled corpora are insufficient. ASR therefore suffers from deficiencies including high speaker dependence and overfitting, which greatly harm recognition accuracy. To tackle these deficiencies, this paper puts forward an audiovisual speech recognition (AVSR) approach based on an LSTM-Transformer. The approach introduces visual modality information, including lip movements, to reduce the dependence of acoustic models on speakers and on the quantity of data. Specifically, through the fusion of audio and visual information, the new approach enhances the expression of the speakers' feature space, thus achieving the speaker adaptation that is difficult with a single modality. The approach also includes experiments on speaker dependence and evaluates to what extent audiovisual fusion depends on speakers. Experimental results show that the CER of AVSR is 16.9% lower than that of traditional models (in the optimal performance scenario) and 11.8% lower than that of lip reading. The accuracy of recognizing phonemes, especially finals, improves substantially. For recognizing initials, accuracy improves for affricates and fricatives, where lip movements are obvious, and deteriorates for stops, where lip movements are not obvious. In AVSR, generalization to different speakers is also better than with a single modality, and the CER can drop by as much as 17.2%. Therefore, AVSR is of great significance in studying the protection and preservation of endangered languages through AI. MDPI 2023-02-12 /pmc/articles/PMC9959391/ /pubmed/36850669 http://dx.doi.org/10.3390/s23042071 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Yu, Chongchong Yu, Jiaqi Qian, Zhaopeng Tan, Yuchen Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech |
title | Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech |
title_full | Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech |
title_fullStr | Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech |
title_full_unstemmed | Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech |
title_short | Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech |
title_sort | improvement of acoustic models fused with lip visual information for low-resource speech |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9959391/ https://www.ncbi.nlm.nih.gov/pubmed/36850669 http://dx.doi.org/10.3390/s23042071 |
work_keys_str_mv | AT yuchongchong improvementofacousticmodelsfusedwithlipvisualinformationforlowresourcespeech AT yujiaqi improvementofacousticmodelsfusedwithlipvisualinformationforlowresourcespeech AT qianzhaopeng improvementofacousticmodelsfusedwithlipvisualinformationforlowresourcespeech AT tanyuchen improvementofacousticmodelsfusedwithlipvisualinformationforlowresourcespeech |
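The CER (character error rate) figures quoted in the abstract and in the description field above are based on a normalized edit distance between the recognized and the reference character sequences. Below is a minimal sketch of how CER is typically computed, assuming a plain Levenshtein distance over characters; the function name and the sample strings are illustrative only and are not taken from the paper.

```python
# Minimal CER sketch: CER = (substitutions + deletions + insertions) / N,
# where N is the number of characters in the reference, implemented here as a
# normalized Levenshtein distance via dynamic programming.
def cer(reference: str, hypothesis: str) -> float:
    ref, hyp = list(reference), list(hypothesis)
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


if __name__ == "__main__":
    # A 10-character reference with 2 character errors gives CER = 0.2 (20%).
    print(cer("abcdefghij", "abXdefghiZ"))  # 0.2
```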