Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning

Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning, which requires collaboration of a team of experts and utilization of high-cost hardware utilities; this increases the application cost in real-world situations. Thus, this study aims to design and implement a smart wearable American Sign Language (ASL) interpretation system using deep learning, which applies sensor fusion that “fuses” six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The study reveals that this model achieves an average recognition rate of 99.81% for dynamic ASL gestures. Moreover, the proposed ASL recognition system can be further integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others and improve their quality of life.
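
To make the pipeline concrete, below is a minimal Python/TensorFlow sketch of the sensor-fusion idea the abstract describes, not the authors' actual implementation: per-time-step readings from the six IMUs (five fingertips plus the back of the hand) are concatenated into one feature vector, and a small recurrent network classifies the windowed gesture sequence. The channel count, window length, network shape, and vocabulary size are illustrative assumptions; the abstract does not specify them.

import numpy as np
import tensorflow as tf

NUM_IMUS = 6          # five fingertips + back of the hand (from the abstract)
CHANNELS_PER_IMU = 6  # assumed: 3-axis accelerometer + 3-axis gyroscope
WINDOW = 50           # assumed number of samples per gesture window
NUM_GESTURES = 27     # hypothetical size of the dynamic-gesture vocabulary

def fuse_imu_frame(per_imu_readings):
    # "Fuse" one time step by concatenating the six per-IMU channel vectors.
    # per_imu_readings: list of six arrays, each of shape (CHANNELS_PER_IMU,)
    return np.concatenate(per_imu_readings)  # shape: (NUM_IMUS * CHANNELS_PER_IMU,)

# A simple deep-learning classifier over the fused sequence (illustrative
# architecture only; the paper's exact network is not given in the abstract).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, NUM_IMUS * CHANNELS_PER_IMU)),
    tf.keras.layers.LSTM(64),  # temporal model of the gesture trajectory
    tf.keras.layers.Dense(NUM_GESTURES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Usage example: one batch of fused gesture windows -> class probabilities.
batch = np.random.randn(8, WINDOW, NUM_IMUS * CHANNELS_PER_IMU).astype("float32")
probs = model(batch)  # shape: (8, NUM_GESTURES)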

Bibliographic Details
Main Authors: Lee, Boon Giin; Chong, Teak-Wei; Chung, Wan-Young
Format: Online Article Text
Language: English
Published: MDPI, 2 November 2020
Published in: Sensors (Basel)
Collection: PubMed (National Center for Biotechnology Information)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7663682/
https://www.ncbi.nlm.nih.gov/pubmed/33147891
http://dx.doi.org/10.3390/s20216256
License: © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).