American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation
Sign language is designed to help the deaf and hard of hearing community convey messages and connect with society. Sign language recognition has been an important domain of research for a long time. Previously, sensor-based approaches have obtained higher accuracy than vision-based approaches....
Main Authors: | Shin, Jungpil; Matsuoka, Akitaka; Hasan, Md. Al Mehedi; Srizon, Azmain Yakin |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8434249/ https://www.ncbi.nlm.nih.gov/pubmed/34502747 http://dx.doi.org/10.3390/s21175856 |
_version_ | 1783751553430061056 |
author | Shin, Jungpil; Matsuoka, Akitaka; Hasan, Md. Al Mehedi; Srizon, Azmain Yakin |
author_facet | Shin, Jungpil; Matsuoka, Akitaka; Hasan, Md. Al Mehedi; Srizon, Azmain Yakin |
author_sort | Shin, Jungpil |
collection | PubMed |
description | Sign language is designed to help the deaf and hard of hearing community convey messages and connect with society. Sign language recognition has been an important domain of research for a long time. Previously, sensor-based approaches have obtained higher accuracy than vision-based approaches. Because vision-based approaches are more cost-effective, research has also been conducted in this area despite the drop in accuracy. The purpose of this research is to recognize American Sign Language characters using hand images obtained from a web camera. In this work, the MediaPipe Hands algorithm was used to estimate hand joints from RGB images of hands captured by a web camera, and two types of features were generated from the estimated joint coordinates for classification: one is the distances between the joint points, and the other is the angles between vectors and the 3D axes. The classifiers used to classify the characters were a support vector machine (SVM) and a light gradient boosting machine (LightGBM). Three character datasets were used for recognition: the ASL Alphabet dataset, the Massey dataset, and the Finger Spelling A dataset. The accuracies obtained were 99.39% for the Massey dataset, 87.60% for the ASL Alphabet dataset, and 98.45% for the Finger Spelling A dataset. The proposed design for automatic American Sign Language recognition is cost-effective, computationally inexpensive, requires no special sensors or devices, and has outperformed previous studies. |
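The two feature types named in the abstract (pairwise joint distances, and angles between joint-to-joint vectors and the 3D axes) can be sketched as below. This is a minimal illustration, not the authors' exact pipeline: the landmark coordinates and the `bones` connection list are hypothetical stand-ins for the 21 (x, y, z) joints that MediaPipe Hands would return.

```python
import math
from itertools import combinations

def joint_distances(landmarks):
    """Euclidean distance between every pair of hand joints."""
    return [math.dist(p, q) for p, q in combinations(landmarks, 2)]

def vector_axis_angles(landmarks, bones):
    """For each joint-to-joint vector, the angle it makes with the
    x, y and z axes; cos(theta) is the normalized component on each axis."""
    angles = []
    for i, j in bones:
        v = [b - a for a, b in zip(landmarks[i], landmarks[j])]
        norm = math.hypot(*v)
        angles.extend(math.acos(max(-1.0, min(1.0, c / norm))) for c in v)
    return angles
```

With MediaPipe's 21 landmarks, the distance feature has C(21, 2) = 210 dimensions, and each connection in `bones` contributes three angles; the concatenated vector would then be fed to the SVM or LightGBM classifier.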
format | Online Article Text |
id | pubmed-8434249 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8434249 2021-09-12 American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation Shin, Jungpil; Matsuoka, Akitaka; Hasan, Md. Al Mehedi; Srizon, Azmain Yakin Sensors (Basel) Article Sign language is designed to help the deaf and hard of hearing community convey messages and connect with society. Sign language recognition has been an important domain of research for a long time. Previously, sensor-based approaches have obtained higher accuracy than vision-based approaches. Because vision-based approaches are more cost-effective, research has also been conducted in this area despite the drop in accuracy. The purpose of this research is to recognize American Sign Language characters using hand images obtained from a web camera. In this work, the MediaPipe Hands algorithm was used to estimate hand joints from RGB images of hands captured by a web camera, and two types of features were generated from the estimated joint coordinates for classification: one is the distances between the joint points, and the other is the angles between vectors and the 3D axes. The classifiers used to classify the characters were a support vector machine (SVM) and a light gradient boosting machine (LightGBM). Three character datasets were used for recognition: the ASL Alphabet dataset, the Massey dataset, and the Finger Spelling A dataset. The accuracies obtained were 99.39% for the Massey dataset, 87.60% for the ASL Alphabet dataset, and 98.45% for the Finger Spelling A dataset. The proposed design for automatic American Sign Language recognition is cost-effective, computationally inexpensive, requires no special sensors or devices, and has outperformed previous studies. MDPI 2021-08-31 /pmc/articles/PMC8434249/ /pubmed/34502747 http://dx.doi.org/10.3390/s21175856 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Shin, Jungpil; Matsuoka, Akitaka; Hasan, Md. Al Mehedi; Srizon, Azmain Yakin American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation |
title | American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation |
title_full | American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation |
title_fullStr | American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation |
title_full_unstemmed | American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation |
title_short | American Sign Language Alphabet Recognition by Extracting Feature from Hand Pose Estimation |
title_sort | american sign language alphabet recognition by extracting feature from hand pose estimation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8434249/ https://www.ncbi.nlm.nih.gov/pubmed/34502747 http://dx.doi.org/10.3390/s21175856 |
work_keys_str_mv | AT shinjungpil americansignlanguagealphabetrecognitionbyextractingfeaturefromhandposeestimation AT matsuokaakitaka americansignlanguagealphabetrecognitionbyextractingfeaturefromhandposeestimation AT hasanmdalmehedi americansignlanguagealphabetrecognitionbyextractingfeaturefromhandposeestimation AT srizonazmainyakin americansignlanguagealphabetrecognitionbyextractingfeaturefromhandposeestimation |