
A multimodal human-robot sign language interaction framework applied in social robots

Deaf-mutes face many difficulties in daily interactions with hearing people through spoken language. Sign language is an important way of expression and communication for deaf-mutes. Therefore, breaking the communication barrier between the deaf-mute and hearing communities is significant for facilitating their integration into society. To help them integrate into social life better, we propose a multimodal Chinese sign language (CSL) gesture interaction framework based on social robots. The CSL gesture information, including both static and dynamic gestures, is captured by two sensors of different modalities: a wearable Myo armband and a Leap Motion sensor collect human arm surface electromyography (sEMG) signals and 3D hand vectors, respectively. The two modalities of gesture data are preprocessed and fused before being sent to the classifier, which improves recognition accuracy and reduces the network's processing time. Since the inputs to the proposed framework are temporal gesture sequences, a long short-term memory (LSTM) recurrent neural network is used to classify them. Comparative experiments are performed on a NAO robot to test our method. Our method can effectively improve CSL gesture recognition accuracy, and it has potential applications in a variety of gesture interaction scenarios beyond social robots.
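
The framework described above fuses two sensing modalities (sEMG from a Myo armband and 3D hand vectors from a Leap Motion sensor) and classifies the fused temporal sequences with an LSTM recurrent network. The following PyTorch sketch only illustrates that general pipeline; the channel counts, feature dimensions, gesture vocabulary size, and the simple per-frame concatenation used for fusion are illustrative assumptions, not details taken from the paper.

# Minimal sketch of an early-fusion LSTM gesture classifier.
# All sizes below are assumptions for illustration, not values from the paper.
import torch
import torch.nn as nn

SEMG_CHANNELS = 8      # Myo armband sEMG channels (assumed)
LEAP_FEATURES = 15     # e.g., 5 fingertip direction vectors x 3 coordinates (assumed)
NUM_CLASSES = 10       # size of the CSL gesture vocabulary (assumed)


class FusedGestureLSTM(nn.Module):
    """LSTM classifier over per-frame fused sEMG + Leap Motion features."""

    def __init__(self, hidden_size: int = 64, num_layers: int = 1):
        super().__init__()
        input_size = SEMG_CHANNELS + LEAP_FEATURES
        self.lstm = nn.LSTM(input_size, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, NUM_CLASSES)

    def forward(self, semg: torch.Tensor, leap: torch.Tensor) -> torch.Tensor:
        # semg: (batch, time, SEMG_CHANNELS); leap: (batch, time, LEAP_FEATURES)
        # Early fusion: concatenate the two modalities at every time step.
        fused = torch.cat([semg, leap], dim=-1)
        outputs, _ = self.lstm(fused)
        # Classify the gesture from the LSTM output at the final time step.
        return self.classifier(outputs[:, -1, :])


if __name__ == "__main__":
    model = FusedGestureLSTM()
    # One dummy batch: 4 gesture sequences of 40 time-aligned frames each.
    semg = torch.randn(4, 40, SEMG_CHANNELS)
    leap = torch.randn(4, 40, LEAP_FEATURES)
    logits = model(semg, leap)
    print(logits.shape)  # torch.Size([4, 10])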

Bibliographic Details
Main Authors: Li, Jie; Zhong, Junpei; Wang, Ning
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10126358/
https://www.ncbi.nlm.nih.gov/pubmed/37113147
http://dx.doi.org/10.3389/fnins.2023.1168888
Record ID: pubmed-10126358
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
Journal: Front Neurosci (Neuroscience)
Published Online: 2023-04-11
Copyright © 2023 Li, Zhong and Wang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.