Deep Learning Technology to Recognize American Sign Language Alphabet

Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research, namely AlexNet, ConvNeXt, EfficientNet, ResNet-50, and VisionTransformer, were trained and tested on an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted in which the models' architectural design parameters were modified to maximize recognition accuracy. The experimental results revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models; EfficientNet attained 99.95%, ConvNeXt 99.51%, and AlexNet 99.50%, while VisionTransformer yielded the lowest accuracy, 88.59%.
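To make the approach concrete, below is a minimal transfer-learning sketch in the spirit of what the abstract describes: fine-tuning an ImageNet-pretrained ResNet-50 (the study's best-performing model) on a folder of ASL alphabet images. This is not the authors' implementation; the dataset path, the 29-class layout (A-Z plus space, delete, and nothing, as in the widely used 87,000-image ASL Alphabet dataset), and all hyperparameters are illustrative assumptions, using PyTorch and torchvision.

```python
# Minimal sketch, NOT the authors' code: fine-tune an ImageNet-pretrained
# ResNet-50 on ASL alphabet images. Dataset path, class count, and
# hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 29  # assumption: A-Z plus space, delete, and nothing

# Standard ImageNet preprocessing, since the backbone is ImageNet-pretrained.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: asl_alphabet/train/<class_name>/<image>.jpg
train_set = datasets.ImageFolder("asl_alphabet/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=4)

# Replace the 1000-way ImageNet head with a head for the ASL classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings

model.train()
for epoch in range(5):  # epoch count is an assumption
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: last-batch loss {loss.item():.4f}")
```

The same loop applies to the other four architectures by swapping the backbone and its classifier head (e.g. models.efficientnet_b0, models.convnext_tiny, models.alexnet, models.vit_b_16), which is how a comparison like the paper's would typically be run.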

Bibliographic Details
Main Authors: Alsharif, Bader; Altaher, Ali Salem; Altaher, Ahmed; Ilyas, Mohammad; Alalwany, Easa
Format: Online Article Text
Language: English
Published in: Sensors (Basel), MDPI, 2023-09-19
Subjects: Article
Collection: PubMed (PMC10535774)
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10535774/
https://www.ncbi.nlm.nih.gov/pubmed/37766026
http://dx.doi.org/10.3390/s23187970