Learning Semantics of Gestural Instructions for Human-Robot Collaboration
Main Authors: | Shukla, Dadhichi; Erkent, Özgür; Piater, Justus |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2018 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5868127/ https://www.ncbi.nlm.nih.gov/pubmed/29615888 http://dx.doi.org/10.3389/fnbot.2018.00007 |
author | Shukla, Dadhichi; Erkent, Özgür; Piater, Justus
collection | PubMed |
description | Designed to work safely alongside humans, collaborative robots need to be capable partners in human-robot teams. Besides key capabilities like detecting gestures, recognizing objects, grasping them, and handing them over, these robots need to adapt their behavior seamlessly for efficient human-robot collaboration. In this context we present the fast, supervised Proactive Incremental Learning (PIL) framework for learning associations between human hand gestures and the intended robotic manipulation actions. The proactive aspect lets the robot predict the human's intent and perform an action without waiting for an instruction; the incremental aspect lets it learn these associations on the fly while performing a task. The approach is probabilistic and statistically driven. As a proof of concept, we focus on a table assembly task in which the robot assists its human partner. We investigate how the accuracy of gesture detection affects the number of interactions required to complete the task, and we conducted a human-robot interaction study with non-roboticist users comparing the proactive robot with a reactive one that waits for instructions. |
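The abstract outlines PIL's core loop: estimate gesture-action associations statistically, act proactively once confident enough, and update the estimates incrementally after each interaction. Below is a minimal Python sketch of that loop. The class name, the 0.8 confidence threshold, the Laplace smoothing, and the success/failure update rule are all illustrative assumptions, not the paper's exact formulation.

```python
from collections import defaultdict

class ProactiveIncrementalLearner:
    """Minimal sketch of a PIL-style learner (illustrative only):
    estimates P(action | gesture) from interaction counts and acts
    proactively once confidence exceeds a threshold."""

    def __init__(self, actions, threshold=0.8, alpha=1.0):
        self.actions = actions
        self.threshold = threshold   # assumed proactive-action threshold
        self.alpha = alpha           # Laplace smoothing constant
        self.counts = defaultdict(lambda: defaultdict(float))

    def posterior(self, gesture):
        # Smoothed estimate of P(action | gesture) from counts so far.
        c = self.counts[gesture]
        scores = {a: max(c[a], 0.0) + self.alpha for a in self.actions}
        total = sum(scores.values())
        return {a: s / total for a, s in scores.items()}

    def act(self, gesture):
        # Proactive mode: execute the most likely action if confident;
        # otherwise fall back to reactive mode (wait for an instruction).
        p = self.posterior(gesture)
        best = max(p, key=p.get)
        return best if p[best] >= self.threshold else None

    def update(self, gesture, action, success=True):
        # Incremental mode: reinforce or penalize the association on the fly.
        self.counts[gesture][action] += 1.0 if success else -0.5


# After enough reinforced rounds, the learner acts without instruction.
pil = ProactiveIncrementalLearner(["hand_over_leg", "hold_plate", "give_screwdriver"])
for _ in range(12):
    pil.update("point_at_leg", "hand_over_leg")
print(pil.act("point_at_leg"))   # -> 'hand_over_leg' (P = 13/15 >= 0.8)
```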
format | Online Article Text |
id | pubmed-5868127 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-5868127 2018-04-03. Learning Semantics of Gestural Instructions for Human-Robot Collaboration. Shukla, Dadhichi; Erkent, Özgür; Piater, Justus. Front Neurorobot (Neuroscience). Frontiers Media S.A. 2018-03-19. /pmc/articles/PMC5868127/ /pubmed/29615888 http://dx.doi.org/10.3389/fnbot.2018.00007 Text en. Copyright © 2018 Shukla, Erkent and Piater. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Learning Semantics of Gestural Instructions for Human-Robot Collaboration |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5868127/ https://www.ncbi.nlm.nih.gov/pubmed/29615888 http://dx.doi.org/10.3389/fnbot.2018.00007 |