
Learning latent actions to control assistive robots

Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today’s robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot’s motion in the x–y plane, in another mode the joystick controls the robot’s z–yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot’s high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.


Bibliographic Details
Main Authors: Losey, Dylan P., Jeon, Hong Jun, Li, Mengxi, Srinivasan, Krishnan, Mandlekar, Ajay, Garg, Animesh, Bohg, Jeannette, Sadigh, Dorsa
Format: Online Article Text
Language: English
Published: Springer US 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8335729/
https://www.ncbi.nlm.nih.gov/pubmed/34366568
http://dx.doi.org/10.1007/s10514-021-10005-w
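The latent-action idea in the abstract above can be pictured as a conditional autoencoder: offline demonstrations of (state, action) pairs are compressed into a latent space sized to the joystick, and a decoder conditioned on the robot's current state expands a latent input back into a full high-DoF action. The sketch below is illustrative only, not the authors' released code; the layer sizes, dimensions, and names (LatentActionModel, train) are assumptions chosen to match the 7-DoF arm and 2-DoF joystick example from the abstract.

```python
# Minimal sketch of a conditional autoencoder for latent actions.
# All dimensions and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 7    # e.g., joint positions of a 7-DoF arm (assumption)
ACTION_DIM = 7   # high-dimensional robot action, e.g., joint velocities
LATENT_DIM = 2   # matches the 2-DoF joystick

class LatentActionModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a (state, action) pair into a latent action z.
        self.encoder = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.Tanh(),
            nn.Linear(64, LATENT_DIM),
        )
        # Decoder: reconstruct the high-DoF action from (state, z).
        # Conditioning on state lets the same joystick input mean
        # different motions in different parts of the task.
        self.decoder = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM),
        )

    def forward(self, state, action):
        z = self.encoder(torch.cat([state, action], dim=-1))
        recon = self.decoder(torch.cat([state, z], dim=-1))
        return recon, z

def train(model, demos, epochs=100, lr=1e-3):
    # demos: iterable of (state, action) tensor pairs from offline task demonstrations
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for state, action in demos:
            recon, _ = model(state, action)
            loss = ((recon - action) ** 2).mean()   # reconstruction loss
            opt.zero_grad()
            loss.backward()
            opt.step()

# At run time, the joystick's two axes are read as the latent action z, and the
# decoder maps (current state, z) to a full robot action, so what the axes mean
# is determined by context rather than by a fixed mode-switching map.
```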
_version_ 1783733182352326656
author Losey, Dylan P.
Jeon, Hong Jun
Li, Mengxi
Srinivasan, Krishnan
Mandlekar, Ajay
Garg, Animesh
Bohg, Jeannette
Sadigh, Dorsa
author_facet Losey, Dylan P.
Jeon, Hong Jun
Li, Mengxi
Srinivasan, Krishnan
Mandlekar, Ajay
Garg, Animesh
Bohg, Jeannette
Sadigh, Dorsa
author_sort Losey, Dylan P.
collection PubMed
description Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today’s robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot’s motion in the x–y plane, in another mode the joystick controls the robot’s z–yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot’s high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis.
format Online
Article
Text
id pubmed-8335729
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-83357292021-08-04 Learning latent actions to control assistive robots Losey, Dylan P. Jeon, Hong Jun Li, Mengxi Srinivasan, Krishnan Mandlekar, Ajay Garg, Animesh Bohg, Jeannette Sadigh, Dorsa Auton Robots Article Assistive robot arms enable people with disabilities to conduct everyday tasks on their own. These arms are dexterous and high-dimensional; however, the interfaces people must use to control their robots are low-dimensional. Consider teleoperating a 7-DoF robot arm with a 2-DoF joystick. The robot is helping you eat dinner, and currently you want to cut a piece of tofu. Today’s robots assume a pre-defined mapping between joystick inputs and robot actions: in one mode the joystick controls the robot’s motion in the x–y plane, in another mode the joystick controls the robot’s z–yaw motion, and so on. But this mapping misses out on the task you are trying to perform! Ideally, one joystick axis should control how the robot stabs the tofu, and the other axis should control different cutting motions. Our insight is that we can achieve intuitive, user-friendly control of assistive robots by embedding the robot’s high-dimensional actions into low-dimensional and human-controllable latent actions. We divide this process into three parts. First, we explore models for learning latent actions from offline task demonstrations, and formalize the properties that latent actions should satisfy. Next, we combine learned latent actions with autonomous robot assistance to help the user reach and maintain their high-level goals. Finally, we learn a personalized alignment model between joystick inputs and latent actions. We evaluate our resulting approach in four user studies where non-disabled participants reach marshmallows, cook apple pie, cut tofu, and assemble dessert. We then test our approach with two disabled adults who leverage assistive devices on a daily basis. Springer US 2021-08-04 2022 /pmc/articles/PMC8335729/ /pubmed/34366568 http://dx.doi.org/10.1007/s10514-021-10005-w Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2021 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle Article
Losey, Dylan P.
Jeon, Hong Jun
Li, Mengxi
Srinivasan, Krishnan
Mandlekar, Ajay
Garg, Animesh
Bohg, Jeannette
Sadigh, Dorsa
Learning latent actions to control assistive robots
title Learning latent actions to control assistive robots
title_full Learning latent actions to control assistive robots
title_fullStr Learning latent actions to control assistive robots
title_full_unstemmed Learning latent actions to control assistive robots
title_short Learning latent actions to control assistive robots
title_sort learning latent actions to control assistive robots
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8335729/
https://www.ncbi.nlm.nih.gov/pubmed/34366568
http://dx.doi.org/10.1007/s10514-021-10005-w
work_keys_str_mv AT loseydylanp learninglatentactionstocontrolassistiverobots
AT jeonhongjun learninglatentactionstocontrolassistiverobots
AT limengxi learninglatentactionstocontrolassistiverobots
AT srinivasankrishnan learninglatentactionstocontrolassistiverobots
AT mandlekarajay learninglatentactionstocontrolassistiverobots
AT garganimesh learninglatentactionstocontrolassistiverobots
AT bohgjeannette learninglatentactionstocontrolassistiverobots
AT sadighdorsa learninglatentactionstocontrolassistiverobots