One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI
This paper addresses the lack of proper Learning from Demonstration (LfD) architectures for Sign Language-based Human–Robot Interactions to make them more extensible. The paper proposes and implements a Learning from Demonstration structure for teaching new Iranian Sign Language signs to a teacher assistant social robot, RASA. This LfD architecture utilizes one-shot learning techniques and a Convolutional Neural Network to learn to recognize and imitate a sign after seeing its demonstration (using a data glove) just once. Despite using a small, low-diversity dataset (~500 signs in 16 categories), the recognition module reached a promising 4-way accuracy of 70% on the test data and showed good potential for increasing the extensibility of sign vocabulary in sign language-based human–robot interactions.
Main Authors: Hosseini, Seyed Ramezan; Taheri, Alireza; Alemi, Minoo; Meghdari, Ali
Format: Online Article Text
Language: English
Published: Springer Netherlands, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8352758/ https://www.ncbi.nlm.nih.gov/pubmed/34394771 http://dx.doi.org/10.1007/s12369-021-00818-1
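The 4-way one-shot recognition described in the abstract can be sketched as follows. This is a hypothetical illustration only, not the paper's implementation: the trained CNN encoder over data-glove signals is stood in for by a fixed random projection, and all names and dimensions are invented.

```python
# Sketch of 4-way one-shot classification: a query sign is assigned to
# whichever of the 4 support demonstrations has the nearest embedding.
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.normal(size=(32, 8))  # stand-in for a trained CNN encoder

def embed(signal: np.ndarray) -> np.ndarray:
    """Map a raw sign signal (length-32 vector here) to a unit-norm 8-d embedding."""
    v = signal @ PROJ
    return v / np.linalg.norm(v)

def one_shot_classify(query: np.ndarray, support: list) -> int:
    """Return the index of the support sign whose embedding has the
    highest cosine similarity to the query embedding."""
    q = embed(query)
    sims = [float(q @ embed(s)) for s in support]
    return int(np.argmax(sims))

# Example: one demonstration per class (one-shot); the query is the same
# demonstration as support sign 2, so nearest-embedding matching returns 2.
support = [rng.normal(size=32) for _ in range(4)]
print(one_shot_classify(support[2], support))  # → 2
```

In a 4-way one-shot evaluation like the one reported (70% accuracy), this matching is repeated over many episodes, each with one demonstration per candidate sign; chance level is 25%.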
_version_ | 1783736253982703616 |
author | Hosseini, Seyed Ramezan; Taheri, Alireza; Alemi, Minoo; Meghdari, Ali
author_facet | Hosseini, Seyed Ramezan; Taheri, Alireza; Alemi, Minoo; Meghdari, Ali
author_sort | Hosseini, Seyed Ramezan |
collection | PubMed |
description | This paper addresses the lack of proper Learning from Demonstration (LfD) architectures for Sign Language-based Human–Robot Interactions to make them more extensible. The paper proposes and implements a Learning from Demonstration structure for teaching new Iranian Sign Language signs to a teacher assistant social robot, RASA. This LfD architecture utilizes one-shot learning techniques and a Convolutional Neural Network to learn to recognize and imitate a sign after seeing its demonstration (using a data glove) just once. Despite using a small, low-diversity dataset (~500 signs in 16 categories), the recognition module reached a promising 4-way accuracy of 70% on the test data and showed good potential for increasing the extensibility of sign vocabulary in sign language-based human–robot interactions. The expansibility and promising results of the one-shot Learning from Demonstration technique in this study are the main achievements of conducting such machine learning algorithms in social Human–Robot Interaction. |
format | Online Article Text |
id | pubmed-8352758 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-8352758 2021-08-10 One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI Hosseini, Seyed Ramezan Taheri, Alireza Alemi, Minoo Meghdari, Ali Int J Soc Robot Article This paper addresses the lack of proper Learning from Demonstration (LfD) architectures for Sign Language-based Human–Robot Interactions to make them more extensible. The paper proposes and implements a Learning from Demonstration structure for teaching new Iranian Sign Language signs to a teacher assistant social robot, RASA. This LfD architecture utilizes one-shot learning techniques and a Convolutional Neural Network to learn to recognize and imitate a sign after seeing its demonstration (using a data glove) just once. Despite using a small, low-diversity dataset (~500 signs in 16 categories), the recognition module reached a promising 4-way accuracy of 70% on the test data and showed good potential for increasing the extensibility of sign vocabulary in sign language-based human–robot interactions. The expansibility and promising results of the one-shot Learning from Demonstration technique in this study are the main achievements of conducting such machine learning algorithms in social Human–Robot Interaction. Springer Netherlands 2021-08-10 /pmc/articles/PMC8352758/ /pubmed/34394771 http://dx.doi.org/10.1007/s12369-021-00818-1 Text en © The Author(s), under exclusive licence to Springer Nature B.V. 2021 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Hosseini, Seyed Ramezan Taheri, Alireza Alemi, Minoo Meghdari, Ali One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI |
title | One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI |
title_full | One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI |
title_fullStr | One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI |
title_full_unstemmed | One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI |
title_short | One-shot Learning from Demonstration Approach Toward a Reciprocal Sign Language-based HRI |
title_sort | one-shot learning from demonstration approach toward a reciprocal sign language-based hri |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8352758/ https://www.ncbi.nlm.nih.gov/pubmed/34394771 http://dx.doi.org/10.1007/s12369-021-00818-1 |
work_keys_str_mv | AT hosseiniseyedramezan oneshotlearningfromdemonstrationapproachtowardareciprocalsignlanguagebasedhri AT taherialireza oneshotlearningfromdemonstrationapproachtowardareciprocalsignlanguagebasedhri AT alemiminoo oneshotlearningfromdemonstrationapproachtowardareciprocalsignlanguagebasedhri AT meghdariali oneshotlearningfromdemonstrationapproachtowardareciprocalsignlanguagebasedhri |