CNN-LSTM Hybrid Real-Time IoT-Based Cognitive Approaches for ISLR with WebRTC: Auditory Impaired Assistive Technology


Bibliographic Details
Main Authors: Gupta, Meenu; Thakur, Narina; Bansal, Dhruvi; Chaudhary, Gopal; Davaasambuu, Battulga; Hua, Qiaozhi
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8885272/
https://www.ncbi.nlm.nih.gov/pubmed/35237390
http://dx.doi.org/10.1155/2022/3978627
Description
Summary: In the era of modern technology, people may readily communicate through facial expressions, body language, and other means. As the use of the Internet evolves, it can be a boon to the medical field. Recently, the Internet of Medical Things (IoMT) has provided a broader platform for handling difficulties linked to healthcare, including hearing impairment. Many translators exist to help people of various linguistic backgrounds communicate more effectively, and kinesics (body-language) analysis makes it possible to interpret the communication of hearing-impaired persons who are standing next to each other. In the present COVID-19 scenario, individuals remain connected via online platforms; however, persons with hearing disabilities face communication challenges on those platforms. The work presented in this research serves as a communication bridge between the hearing-impaired community and the rest of the globe. The proposed work for Indian Sign Linguistic Recognition (ISLR) uses three-dimensional convolutional neural networks (3D-CNNs) and long short-term memory (LSTM) networks for analysis. A conventional hand gesture recognition system identifies the hand and its location or orientation, extracts certain essential features, and applies an appropriate machine learning algorithm to recognise the completed action. The calling interface of the web application is implemented with WebRTC. The web app also uses a teleprompting technology that transforms sign language into audible sound. The proposed web app's average recognition rate is 97.21%.
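
To make the 3D-CNN + LSTM hybrid concrete, the sketch below builds a minimal classifier of that shape in Keras. All specifics are assumptions for illustration, not taken from the paper: clips of 16 RGB frames at 64x64 resolution, a hypothetical 50-sign vocabulary, and illustrative layer sizes.

```python
# Minimal 3D-CNN + LSTM sketch for isolated sign recognition.
# Assumed input: clips of 16 RGB frames at 64x64; the 50 classes are a
# hypothetical vocabulary size, not the paper's dataset.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_FRAMES, HEIGHT, WIDTH, CHANNELS = 16, 64, 64, 3
NUM_CLASSES = 50  # hypothetical sign vocabulary size

model = models.Sequential([
    # 3D convolutions extract spatio-temporal features from the clip.
    layers.Conv3D(32, (3, 3, 3), padding="same", activation="relu",
                  input_shape=(NUM_FRAMES, HEIGHT, WIDTH, CHANNELS)),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),  # pool space, keep time
    layers.Conv3D(64, (3, 3, 3), padding="same", activation="relu"),
    layers.MaxPooling3D(pool_size=(1, 2, 2)),
    # Flatten each frame's feature map so the LSTM sees one vector per
    # time step and can model the temporal ordering of the gesture.
    layers.Reshape((NUM_FRAMES, -1)),
    layers.LSTM(128),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Training on labelled sign clips then reduces to a `model.fit(clips, labels)` call over a tensor of shape (batch, 16, 64, 64, 3).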
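The paper implements WebRTC inside the browser-based calling interface. As a rough illustration of the same handshake, the sketch below uses the aiortc library (a Python WebRTC stack) to create a peer connection, attach a camera track, and produce the SDP offer that a signalling channel would carry to the remote peer; the capture device path is a placeholder.

```python
# Sketch of the WebRTC offer side using aiortc; the actual web app
# does this in the browser via the JavaScript WebRTC API.
import asyncio
from aiortc import RTCPeerConnection
from aiortc.contrib.media import MediaPlayer

async def make_offer() -> str:
    pc = RTCPeerConnection()
    # Hypothetical webcam device; the same feed would also drive the
    # sign recogniser.
    player = MediaPlayer("/dev/video0", format="v4l2")
    pc.addTrack(player.video)
    offer = await pc.createOffer()
    await pc.setLocalDescription(offer)
    # The SDP text is what the signalling channel sends to the callee.
    return pc.localDescription.sdp

print(asyncio.run(make_offer()))
```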
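Finally, the sign-to-audible-sound step can be approximated by feeding the recogniser's text output to a text-to-speech engine. The paper does not name the engine it uses; pyttsx3 below is a stand-in offline choice, and the gloss string is hypothetical.

```python
# Stand-in for the teleprompting step: speak a recognised sign gloss.
import pyttsx3

def speak_gloss(gloss: str) -> None:
    """Read one recognised sign's text label aloud."""
    engine = pyttsx3.init()
    engine.say(gloss)
    engine.runAndWait()

speak_gloss("hello")  # e.g. the top-1 label predicted by the model above
```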