
O(2)A: One-Shot Observational Learning with Action Vectors

We present O(2)A, a novel method for learning to perform robotic manipulation tasks from a single (one-shot) third-person demonstration video. To our knowledge, it is the first time this has been done for a single demonstration. The key novelty lies in pre-training a feature extractor for creating a perceptual representation for actions that we call "action vectors". The action vectors are extracted using a 3D-CNN model pre-trained as an action classifier on a generic action dataset. The distance between the action vectors from the observed third-person demonstration and trial robot executions is used as a reward for reinforcement learning of the demonstrated task. We report on experiments in simulation and on a real robot, with changes in viewpoint of observation, properties of the objects involved, scene background and morphology of the manipulator between the demonstration and the learning domains. O(2)A outperforms baseline approaches under different domain shifts and has comparable performance with an Oracle (that uses an ideal reward function). Videos of the results, including demonstrations, can be found on our project website.
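
The abstract describes the reward construction precisely enough to sketch. Below is a minimal Python illustration, not the authors' implementation: "extractor" is a hypothetical handle to the pre-trained 3D-CNN action classifier, and negative Euclidean distance is one plausible reading of the unspecified "distance between the action vectors".

    import numpy as np

    def action_vector(clip, extractor):
        # clip: stacked video frames, shape (T, H, W, 3).
        # extractor: stands in for the pre-trained 3D-CNN action
        # classifier whose features form the "action vector".
        return extractor(clip)

    def reward(demo_vector, trial_vector):
        # Negative Euclidean distance: the closer the robot's trial
        # execution is to the demonstration in action-vector space,
        # the higher the reward passed to the RL agent.
        return -float(np.linalg.norm(demo_vector - trial_vector))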


Bibliographic Details
Main Authors: Pauly, Leo; Agboh, Wisdom C.; Hogg, David C.; Fuentes, Raul
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8367442/
https://www.ncbi.nlm.nih.gov/pubmed/34409071
http://dx.doi.org/10.3389/frobt.2021.686368
Collection: PubMed
Record ID: pubmed-8367442
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Robot AI
Published Online: 2021-08-02
Rights: Copyright © 2021 Pauly, Agboh, Hogg and Fuentes. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.