Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet
Sign language is the native language of deaf people, used in their daily lives, and it facilitates communication among them. This study targets the communication problem faced by deaf people through sign language recognition. Sign language refers to the use of the arms and hands to communicate,...
Main Authors: | Alnuaim, Abeer; Zakariah, Mohammed; Hatamleh, Wesam Atef; Tarazi, Hussam; Tripathi, Vikas; Amoatey, Enoch Tetteh |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Hindawi 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8976610/ https://www.ncbi.nlm.nih.gov/pubmed/35378817 http://dx.doi.org/10.1155/2022/8777355 |
_version_ | 1784680613432786944 |
---|---|
author | Alnuaim, Abeer; Zakariah, Mohammed; Hatamleh, Wesam Atef; Tarazi, Hussam; Tripathi, Vikas; Amoatey, Enoch Tetteh |
author_facet | Alnuaim, Abeer; Zakariah, Mohammed; Hatamleh, Wesam Atef; Tarazi, Hussam; Tripathi, Vikas; Amoatey, Enoch Tetteh |
author_sort | Alnuaim, Abeer |
collection | PubMed |
description | Sign language is the native language of deaf people, used in their daily lives, and it facilitates communication among them. This study targets the communication problem faced by deaf people through sign language recognition. Sign language refers to the use of the arms and hands to communicate, particularly among those who are deaf, and it varies depending on the person and the region they come from. As a result, there is no single standardized sign language; for example, American, British, Chinese, and Arabic sign languages are all distinct. In this study, we trained a model to classify Arabic sign language, which consists of 32 Arabic alphabet sign classes. In images, sign language is detected through the pose of the hand. We proposed a framework consisting of two CNN models, each individually trained on the training set, whose final predictions were ensembled to achieve higher accuracy. The dataset used in this study, ArSL2018, was released in 2019 by Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia. The main contribution of this study is the preprocessing pipeline: resizing the images to 64 × 64 pixels, converting the grayscale images to three-channel images, and then applying a median filter, which acts as a low-pass filter to smooth the images and reduce noise, making the model more robust and helping to avoid overfitting. Each preprocessed image is then fed into two different models, ResNet50 and MobileNetV2, whose architectures were used together. On the test set for the whole dataset, we achieved an accuracy of about 97% after applying several preprocessing techniques, different hyperparameters for each model, and different data augmentation techniques. |
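The preprocessing and ensembling steps described above can be sketched in plain NumPy. This is a minimal illustration, not the authors' code: the function names are hypothetical, a nearest-neighbour resize and a simple 3 × 3 median filter stand in for whatever library routines were actually used, and the ensemble is assumed to be soft voting (averaging the two models' class probabilities).

```python
import numpy as np

def preprocess(gray: np.ndarray, size: int = 64) -> np.ndarray:
    """Resize a grayscale image to size x size, smooth it with a 3x3
    median filter (a low-pass, noise-reducing step), and replicate it
    into three channels for the CNN input."""
    h, w = gray.shape
    # Nearest-neighbour resize via index mapping.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = gray[np.ix_(rows, cols)].astype(np.float32)
    # 3x3 median filter: stack the nine shifted views of an
    # edge-padded copy and take the per-pixel median.
    padded = np.pad(resized, 1, mode="edge")
    windows = np.stack([padded[r:r + size, c:c + size]
                        for r in range(3) for c in range(3)])
    smoothed = np.median(windows, axis=0)
    # Grayscale -> three identical channels, shape (size, size, 3).
    return np.stack([smoothed] * 3, axis=-1)

def ensemble_predict(probs_resnet: np.ndarray,
                     probs_mobilenet: np.ndarray) -> np.ndarray:
    """Soft-voting ensemble: average the two models' per-class
    probabilities and take the argmax over the 32 sign classes."""
    return np.argmax((probs_resnet + probs_mobilenet) / 2.0, axis=-1)
```

For example, a 128 × 128 grayscale frame passed through `preprocess` yields a 64 × 64 × 3 array ready for either network, and `ensemble_predict` combines the two softmax outputs into a single class index per image.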
format | Online Article Text |
id | pubmed-8976610 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Hindawi |
record_format | MEDLINE/PubMed |
spelling | pubmed-89766102022-04-03 Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet Alnuaim, Abeer Zakariah, Mohammed Hatamleh, Wesam Atef Tarazi, Hussam Tripathi, Vikas Amoatey, Enoch Tetteh Comput Intell Neurosci Research Article Sign language is the native language of deaf people, used in their daily lives, and it facilitates communication among them. This study targets the communication problem faced by deaf people through sign language recognition. Sign language refers to the use of the arms and hands to communicate, particularly among those who are deaf, and it varies depending on the person and the region they come from. As a result, there is no single standardized sign language; for example, American, British, Chinese, and Arabic sign languages are all distinct. In this study, we trained a model to classify Arabic sign language, which consists of 32 Arabic alphabet sign classes. In images, sign language is detected through the pose of the hand. We proposed a framework consisting of two CNN models, each individually trained on the training set, whose final predictions were ensembled to achieve higher accuracy. The dataset used in this study, ArSL2018, was released in 2019 by Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia. The main contribution of this study is the preprocessing pipeline: resizing the images to 64 × 64 pixels, converting the grayscale images to three-channel images, and then applying a median filter, which acts as a low-pass filter to smooth the images and reduce noise, making the model more robust and helping to avoid overfitting. Each preprocessed image is then fed into two different models, ResNet50 and MobileNetV2, whose architectures were used together. On the test set for the whole dataset, we achieved an accuracy of about 97% after applying several preprocessing techniques, different hyperparameters for each model, and different data augmentation techniques. Hindawi 2022-03-26 /pmc/articles/PMC8976610/ /pubmed/35378817 http://dx.doi.org/10.1155/2022/8777355 Text en Copyright © 2022 Abeer Alnuaim et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Article Alnuaim, Abeer Zakariah, Mohammed Hatamleh, Wesam Atef Tarazi, Hussam Tripathi, Vikas Amoatey, Enoch Tetteh Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet |
title | Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet |
title_full | Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet |
title_fullStr | Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet |
title_full_unstemmed | Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet |
title_short | Human-Computer Interaction with Hand Gesture Recognition Using ResNet and MobileNet |
title_sort | human-computer interaction with hand gesture recognition using resnet and mobilenet |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8976610/ https://www.ncbi.nlm.nih.gov/pubmed/35378817 http://dx.doi.org/10.1155/2022/8777355 |
work_keys_str_mv | AT alnuaimabeer humancomputerinteractionwithhandgesturerecognitionusingresnetandmobilenet AT zakariahmohammed humancomputerinteractionwithhandgesturerecognitionusingresnetandmobilenet AT hatamlehwesamatef humancomputerinteractionwithhandgesturerecognitionusingresnetandmobilenet AT tarazihussam humancomputerinteractionwithhandgesturerecognitionusingresnetandmobilenet AT tripathivikas humancomputerinteractionwithhandgesturerecognitionusingresnetandmobilenet AT amoateyenochtetteh humancomputerinteractionwithhandgesturerecognitionusingresnetandmobilenet |