
Development of Language Models for Continuous Uzbek Speech Recognition System


Bibliographic Details

Main authors: Mukhamadiyev, Abdinabi; Mukhiddinov, Mukhriddin; Khujayarov, Ilyos; Ochilov, Mannon; Cho, Jinsoo
Format: Online article (text)
Language: English
Published: MDPI, 2023
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9919949/
https://www.ncbi.nlm.nih.gov/pubmed/36772184
http://dx.doi.org/10.3390/s23031145
Collection: PubMed
Abstract: Automatic speech recognition systems with large vocabularies, like other natural language processing applications, cannot operate without a language model. Most studies on pre-trained language models have focused on widely spoken languages such as English, Chinese, and various European languages, and no publicly available Uzbek speech dataset exists. Language models for such low-resource languages therefore need to be studied and built. This study addresses that gap by developing a low-resource language model for Uzbek and analyzing its linguistic patterns. We propose an Uzbek language model, UzLM, built by evaluating statistical and neural-network-based language models that account for the unique features of the Uzbek language. Our Uzbek-specific linguistic representation lets us construct a more robust UzLM from 80 million words drawn from various sources, using the same number of or fewer training words than previous studies. Roughly 68,000 distinct words and 15 million sentences were collected to create this corpus. Experiments on continuous Uzbek speech recognition show that, compared with manual encoding, neural-network-based language models reduced the character error rate to 5.26%.
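The abstract reports a character error rate (CER) of 5.26%. As background, CER is conventionally computed as the character-level edit (Levenshtein) distance between the recognizer output and the reference transcript, normalized by the reference length. The sketch below illustrates that standard metric; it is not code from the paper itself, and the example strings are hypothetical:

```python
def edit_distance(ref, hyp):
    # Levenshtein distance via a single-row dynamic-programming table.
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))  # dp[j] = distance between ref[:0] and hyp[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i  # prev holds the old diagonal value dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                              # deletion
                        dp[j - 1] + 1,                          # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))      # substitution
            prev = cur
    return dp[n]

def cer(reference, hypothesis):
    # Character error rate: edit distance normalized by reference length.
    return edit_distance(reference, hypothesis) / len(reference)

# One substitution in an 11-character reference -> CER of 1/11.
print(cer("salom dunyo", "salom dunye"))
```

A CER of 5.26% would mean roughly one character error per nineteen reference characters, averaged over the test set.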
Record ID: pubmed-9919949
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published online: 2023-01-19
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Subjects: Article