
Gaze-Based Intention Estimation for Shared Autonomy in Pick-and-Place Tasks

Shared autonomy aims at combining robotic and human control in the execution of remote, teleoperated tasks. This cooperative interaction cannot be brought about without the robot first recognizing the current human intention in a fast and reliable way so that a suitable assisting plan can be quickly instantiated and executed. Eye movements have long been known to be highly predictive of the cognitive agenda unfolding during manual tasks and constitute, hence, the earliest and most reliable behavioral cues for intention estimation. In this study, we present an experiment aimed at analyzing human behavior in simple teleoperated pick-and-place tasks in a simulated scenario and at devising a suitable model for early estimation of the current proximal intention. We show that scan paths are, as expected, heavily shaped by the current intention and that two types of Gaussian Hidden Markov Models, one more scene-specific and one more action-specific, achieve a very good prediction performance, while also generalizing to new users and spatial arrangements. We finally discuss how behavioral and model results suggest that eye movements reflect to some extent the invariance and generality of higher-level planning across object configurations, which can be leveraged by cooperative robotic systems.
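
The abstract describes training Gaussian Hidden Markov Models per intention and scoring them against an unfolding gaze sequence. Below is a minimal sketch of that general idea, not the authors' implementation: the hmmlearn library, the two intention labels, the synthetic fixation data, and the 2-D (x, y) feature choice are all assumptions made for illustration.

```python
# Sketch: one Gaussian HMM per intention, early classification of a
# partial scan path by maximum log-likelihood. Illustrative only; the
# paper's actual features, labels, and training setup are not shown here.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_scanpaths(center, n_seqs=20, seq_len=15):
    """Toy fixation sequences (x, y) drifting toward a target region."""
    seqs = []
    for _ in range(n_seqs):
        drift = np.linspace(0, 1, seq_len)[:, None] * center
        seqs.append(drift + rng.normal(scale=0.05, size=(seq_len, 2)))
    return seqs

# Toy training data: gaze converges on different regions per intention.
train = {
    "pick":  make_scanpaths(np.array([0.2, 0.8])),
    "place": make_scanpaths(np.array([0.8, 0.2])),
}

# Fit one Gaussian HMM per intention on the concatenated sequences.
models = {}
for intention, seqs in train.items():
    X = np.concatenate(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag",
                        n_iter=50, random_state=0)
    m.fit(X, lengths)
    models[intention] = m

# Early estimation: score a *partial* scan path (6 fixations) under
# each model and pick the intention with the highest log-likelihood.
partial = make_scanpaths(np.array([0.2, 0.8]), n_seqs=1, seq_len=6)[0]
scores = {k: m.score(partial) for k, m in models.items()}
print(max(scores, key=scores.get))  # expected: "pick"
```

The decision rule is simply an argmax over per-model log-likelihoods, which is what allows an estimate to be produced early, from a partial gaze sequence, rather than only after the action completes.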

Bibliographic Details
Main Authors: Fuchs, Stefan; Belardinelli, Anna
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8085393/
https://www.ncbi.nlm.nih.gov/pubmed/33935675
http://dx.doi.org/10.3389/fnbot.2021.647930
Journal: Frontiers in Neurorobotics (Front Neurorobot), Neuroscience section
Record: pubmed-8085393 (PubMed collection, MEDLINE/PubMed format, National Center for Biotechnology Information)
Published online: 2021-04-16
Copyright © 2021 Fuchs and Belardinelli. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.