Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired

Natural Language Processing (NLP) is a range of theoretically motivated computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis, with the goal of achieving human-like language processing for a variety of tasks and applications. Hearing and visually im...

Full description

Bibliographic Details
Main Authors: Bhatia, Surbhi, Devi, Ajantha, Alsuwailem, Razan Ibrahim, Mashat, Arwa
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9197259/
https://www.ncbi.nlm.nih.gov/pubmed/35712297
http://dx.doi.org/10.3389/fpubh.2022.898355
_version_ 1784727367511441408
author Bhatia, Surbhi
Devi, Ajantha
Alsuwailem, Razan Ibrahim
Mashat, Arwa
author_facet Bhatia, Surbhi
Devi, Ajantha
Alsuwailem, Razan Ibrahim
Mashat, Arwa
author_sort Bhatia, Surbhi
collection PubMed
description Natural Language Processing (NLP) is a range of theoretically motivated computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis, with the goal of achieving human-like language processing for a variety of tasks and applications. Hearing and visually impaired people are unable to see at all or have very low vision, and are likewise unable to hear at all or have great difficulty hearing. Obtaining information is difficult for them because both hearing and vision, the principal senses for receiving information, are impaired. People with combined hearing and visual impairment are therefore considered to face a substantially larger information deficit than people with a single disability such as blindness or deafness. Visually and hearing-impaired people who are unable to communicate with the outside world may experience emotional loneliness, which can lead to stress and, in extreme cases, serious mental illness. Overcoming this information handicap is consequently a critical issue for visually and hearing-impaired people who want to live active, independent lives in society. The major objective of this study is to recognize Arabic speech in real time and convert it to Arabic text using Convolutional Neural Network (CNN)-based algorithms, then save the text to an SD card. The Arabic text is subsequently translated into Arabic Braille characters, which control the Braille pattern on a solenoid-driven Braille display. The Braille lettering actuated under the finger was read by visually and hearing-impaired participants who were proficient Braille readers. The CNN's learning parameters, in combination with the ReLU activation function, were fine-tuned for optimization, resulting in a model training accuracy of 90%. Testing of the tuned model shows that adding the ReLU activation function to the CNN improves recognition accuracy, reaching 84% on spoken Arabic digits.
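As a concrete illustration of the recognition step in the description above, the following is a minimal, hypothetical sketch of a small CNN with ReLU activations that classifies spoken Arabic digits (10 classes) from MFCC features, written with TensorFlow/Keras. The input shape, layer sizes, optimizer, and training settings are illustrative assumptions, not values reported in the paper.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_digit_cnn(input_shape=(40, 100, 1), num_classes=10):
    # Assumed input: 40 MFCC coefficients x 100 time frames per utterance.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(num_classes, activation="softmax"),  # one output per Arabic digit
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage sketch: x_train holds MFCC tensors, y_train holds integer digit labels.
# model = build_digit_cnn()
# model.fit(x_train, y_train, validation_split=0.1, epochs=30)

The predicted digit would then be mapped to its Arabic Braille cell and sent to the solenoid driver of the Braille display; that mapping and the hardware interface are outside the scope of this sketch.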
format Online
Article
Text
id pubmed-9197259
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9197259 2022-06-15 Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired Bhatia, Surbhi Devi, Ajantha Alsuwailem, Razan Ibrahim Mashat, Arwa Front Public Health Public Health Natural Language Processing (NLP) is a range of theoretically motivated computational techniques for analyzing and representing naturally occurring texts at one or more levels of linguistic analysis, with the goal of achieving human-like language processing for a variety of tasks and applications. Hearing and visually impaired people are unable to see at all or have very low vision, and are likewise unable to hear at all or have great difficulty hearing. Obtaining information is difficult for them because both hearing and vision, the principal senses for receiving information, are impaired. People with combined hearing and visual impairment are therefore considered to face a substantially larger information deficit than people with a single disability such as blindness or deafness. Visually and hearing-impaired people who are unable to communicate with the outside world may experience emotional loneliness, which can lead to stress and, in extreme cases, serious mental illness. Overcoming this information handicap is consequently a critical issue for visually and hearing-impaired people who want to live active, independent lives in society. The major objective of this study is to recognize Arabic speech in real time and convert it to Arabic text using Convolutional Neural Network (CNN)-based algorithms, then save the text to an SD card. The Arabic text is subsequently translated into Arabic Braille characters, which control the Braille pattern on a solenoid-driven Braille display. The Braille lettering actuated under the finger was read by visually and hearing-impaired participants who were proficient Braille readers. The CNN's learning parameters, in combination with the ReLU activation function, were fine-tuned for optimization, resulting in a model training accuracy of 90%. Testing of the tuned model shows that adding the ReLU activation function to the CNN improves recognition accuracy, reaching 84% on spoken Arabic digits. Frontiers Media S.A. 2022-05-27 /pmc/articles/PMC9197259/ /pubmed/35712297 http://dx.doi.org/10.3389/fpubh.2022.898355 Text en Copyright © 2022 Bhatia, Devi, Alsuwailem and Mashat. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Public Health
Bhatia, Surbhi
Devi, Ajantha
Alsuwailem, Razan Ibrahim
Mashat, Arwa
Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired
title Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired
title_full Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired
title_fullStr Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired
title_full_unstemmed Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired
title_short Convolutional Neural Network Based Real Time Arabic Speech Recognition to Arabic Braille for Hearing and Visually Impaired
title_sort convolutional neural network based real time arabic speech recognition to arabic braille for hearing and visually impaired
topic Public Health
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9197259/
https://www.ncbi.nlm.nih.gov/pubmed/35712297
http://dx.doi.org/10.3389/fpubh.2022.898355
work_keys_str_mv AT bhatiasurbhi convolutionalneuralnetworkbasedrealtimearabicspeechrecognitiontoarabicbrailleforhearingandvisuallyimpaired
AT deviajantha convolutionalneuralnetworkbasedrealtimearabicspeechrecognitiontoarabicbrailleforhearingandvisuallyimpaired
AT alsuwailemrazanibrahim convolutionalneuralnetworkbasedrealtimearabicspeechrecognitiontoarabicbrailleforhearingandvisuallyimpaired
AT mashatarwa convolutionalneuralnetworkbasedrealtimearabicspeechrecognitiontoarabicbrailleforhearingandvisuallyimpaired