
Interactive and incremental learning of spatial object relations from human demonstrations

Humans use semantic concepts such as spatial relations between objects to describe scenes and communicate tasks such as “Put the tea to the right of the cup” or “Move the plate between the fork and the spoon.” Like children, assistive robots must be able to learn the sub-symbolic meaning of such concepts from human demonstrations and instructions. We address the problem of incrementally learning geometric models of spatial relations from a few demonstrations collected online during interaction with a human. Such models enable a robot to manipulate objects in order to fulfill desired spatial relations specified by verbal instructions. At the start, we assume the robot has no geometric model of spatial relations. Given a task as above, the robot asks the user to demonstrate the task once in order to create a model from a single demonstration, leveraging a cylindrical probability distribution as a generative representation of spatial relations. We show how this model can be updated incrementally with each new demonstration, without access to past examples, in a sample-efficient way using incremental maximum likelihood estimation, and we demonstrate the approach on a real humanoid robot.
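The abstract rests on two ideas: a cylindrical probability distribution as a generative representation of a spatial relation, and incremental maximum likelihood updates that never revisit past demonstrations. As a minimal illustration (not necessarily the article's exact model), the sketch below assumes the cylindrical distribution factorizes into a von Mises component over the azimuth angle and independent Gaussian components over radius and height, and maintains only running sufficient statistics so each new demonstration can be folded in without storing earlier ones.

```python
import math

class IncrementalCylindricalModel:
    """Sketch of incremental ML estimation for a cylindrical distribution
    over (azimuth, radius, height).

    Hypothetical factorization for illustration: von Mises over the azimuth,
    independent Gaussians over radius and height. Only running sufficient
    statistics are kept, so past demonstrations are never needed again.
    """

    def __init__(self):
        self.n = 0
        self.sum_cos = 0.0      # circular sufficient statistics (azimuth)
        self.sum_sin = 0.0
        self.mean = [0.0, 0.0]  # running means of (radius, height)
        self.m2 = [0.0, 0.0]    # running squared deviations (Welford)

    def update(self, azimuth, radius, height):
        """Fold one new demonstration into the sufficient statistics."""
        self.n += 1
        self.sum_cos += math.cos(azimuth)
        self.sum_sin += math.sin(azimuth)
        for i, x in enumerate((radius, height)):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (x - self.mean[i])

    def estimate(self):
        """Return current ML estimates from the running statistics."""
        mu_angle = math.atan2(self.sum_sin, self.sum_cos)
        r_bar = math.hypot(self.sum_cos, self.sum_sin) / self.n
        # Closed-form approximation of the von Mises concentration
        # (Fisher, 1993); a stand-in for a full ML solver.
        kappa = r_bar * (2.0 - r_bar ** 2) / max(1.0 - r_bar ** 2, 1e-9)
        variances = [m2 / self.n for m2 in self.m2]  # ML (biased) variances
        return mu_angle, kappa, list(self.mean), variances


# Example: one demonstration creates the model, later ones refine it,
# matching the interaction pattern described in the abstract.
model = IncrementalCylindricalModel()
model.update(azimuth=0.10, radius=0.25, height=0.00)  # first demonstration
model.update(azimuth=0.20, radius=0.30, height=0.02)  # incremental update
print(model.estimate())
```

Placing an object to satisfy a verbal instruction would then amount to sampling from, or taking the mode of, the learned distribution; the class name, factorization, and concentration approximation above are assumptions for illustration only.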


Bibliographic Details
Main Authors: Kartmann, Rainer; Asfour, Tamim
Format: Online Article Text
Language: English
Journal: Front Robot AI
Published: Frontiers Media S.A., 2023-05-18
Subjects: Robotics and AI
Collection: PubMed (pubmed-10232811)
Institution: National Center for Biotechnology Information
License: Copyright © 2023 Kartmann and Asfour. Open-access article distributed under the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10232811/
https://www.ncbi.nlm.nih.gov/pubmed/37275214
http://dx.doi.org/10.3389/frobt.2023.1151303