
Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks

We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors (tools, objects, actions, and effects) at the same time, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint-angle information. To allow the robot to learn tool use, we collect training data by controlling the robot to perform various object operations using several tools with multiple actions that lead to different effects. The tool-use model is thereby trained, learning sensorimotor coordination and acquiring the relationships among tools, objects, actions, and effects in its latent space. We can give the robot a task goal by providing an image showing the target placement and orientation of the object. Using the goal image with the tool-use model, the robot detects the features of the tools and objects and automatically determines how to act to reproduce the target effects. The robot then generates actions that adjust to the real-time situation, even when the tools and objects are unknown and more complicated than the trained ones.


Bibliographic Details
Main Authors: Saito, Namiko, Ogata, Tetsuya, Mori, Hiroki, Murata, Shingo, Sugano, Shigeki
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8510504/
https://www.ncbi.nlm.nih.gov/pubmed/34651020
http://dx.doi.org/10.3389/frobt.2021.748716
_version_ 1784582588661235712
author Saito, Namiko
Ogata, Tetsuya
Mori, Hiroki
Murata, Shingo
Sugano, Shigeki
author_facet Saito, Namiko
Ogata, Tetsuya
Mori, Hiroki
Murata, Shingo
Sugano, Shigeki
author_sort Saito, Namiko
collection PubMed
description We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors (tools, objects, actions, and effects) at the same time, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint-angle information. To allow the robot to learn tool use, we collect training data by controlling the robot to perform various object operations using several tools with multiple actions that lead to different effects. The tool-use model is thereby trained, learning sensorimotor coordination and acquiring the relationships among tools, objects, actions, and effects in its latent space. We can give the robot a task goal by providing an image showing the target placement and orientation of the object. Using the goal image with the tool-use model, the robot detects the features of the tools and objects and automatically determines how to act to reproduce the target effects. The robot then generates actions that adjust to the real-time situation, even when the tools and objects are unknown and more complicated than the trained ones.
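The abstract describes fusing image, force, and joint-angle data into a shared latent space. As a rough illustration only, not the authors' architecture, the following NumPy sketch shows what such multimodal fusion can look like; every layer size, encoder name, and weight here is a hypothetical placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_tanh(in_dim, out_dim):
    """One dense layer with tanh activation; weights are random placeholders."""
    W = rng.standard_normal((in_dim, out_dim)) * 0.1
    b = np.zeros(out_dim)
    return lambda x: np.tanh(x @ W + b)

# Hypothetical per-modality encoders (all dimensions are illustrative).
enc_image = dense_tanh(64, 16)   # flattened image features
enc_force = dense_tanh(6, 16)    # force-sensor reading
enc_joint = dense_tanh(7, 16)    # joint angles
fuse_head = dense_tanh(48, 8)    # projection into a shared latent space

def encode(image, force, joints):
    """Fuse the three modalities into one shared latent vector."""
    z = np.concatenate([enc_image(image), enc_force(force), enc_joint(joints)])
    return fuse_head(z)

latent = encode(rng.standard_normal(64),
                rng.standard_normal(6),
                rng.standard_normal(7))
print(latent.shape)  # (8,)
```

In the paper's setting, a latent vector like this would additionally be shaped by training on goal images and motor sequences; the sketch only shows the fusion step.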
format Online
Article
Text
id pubmed-8510504
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-85105042021-10-13 Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks Saito, Namiko Ogata, Tetsuya Mori, Hiroki Murata, Shingo Sugano, Shigeki Front Robot AI Robotics and AI We propose a tool-use model that enables a robot to act toward a provided goal. It is important to consider the features of four factors (tools, objects, actions, and effects) at the same time, because they are related to each other and one factor can influence the others. The tool-use model is constructed with deep neural networks (DNNs) using multimodal sensorimotor data: image, force, and joint-angle information. To allow the robot to learn tool use, we collect training data by controlling the robot to perform various object operations using several tools with multiple actions that lead to different effects. The tool-use model is thereby trained, learning sensorimotor coordination and acquiring the relationships among tools, objects, actions, and effects in its latent space. We can give the robot a task goal by providing an image showing the target placement and orientation of the object. Using the goal image with the tool-use model, the robot detects the features of the tools and objects and automatically determines how to act to reproduce the target effects. The robot then generates actions that adjust to the real-time situation, even when the tools and objects are unknown and more complicated than the trained ones. Frontiers Media S.A. 2021-09-28 /pmc/articles/PMC8510504/ /pubmed/34651020 http://dx.doi.org/10.3389/frobt.2021.748716 Text en Copyright © 2021 Saito, Ogata, Mori, Murata and Sugano. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY).
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Saito, Namiko
Ogata, Tetsuya
Mori, Hiroki
Murata, Shingo
Sugano, Shigeki
Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks
title Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks
title_full Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks
title_fullStr Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks
title_full_unstemmed Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks
title_short Tool-Use Model to Reproduce the Goal Situations Considering Relationship Among Tools, Objects, Actions and Effects Using Multimodal Deep Neural Networks
title_sort tool-use model to reproduce the goal situations considering relationship among tools, objects, actions and effects using multimodal deep neural networks
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8510504/
https://www.ncbi.nlm.nih.gov/pubmed/34651020
http://dx.doi.org/10.3389/frobt.2021.748716
work_keys_str_mv AT saitonamiko toolusemodeltoreproducethegoalsituationsconsideringrelationshipamongtoolsobjectsactionsandeffectsusingmultimodaldeepneuralnetworks
AT ogatatetsuya toolusemodeltoreproducethegoalsituationsconsideringrelationshipamongtoolsobjectsactionsandeffectsusingmultimodaldeepneuralnetworks
AT morihiroki toolusemodeltoreproducethegoalsituationsconsideringrelationshipamongtoolsobjectsactionsandeffectsusingmultimodaldeepneuralnetworks
AT muratashingo toolusemodeltoreproducethegoalsituationsconsideringrelationshipamongtoolsobjectsactionsandeffectsusingmultimodaldeepneuralnetworks
AT suganoshigeki toolusemodeltoreproducethegoalsituationsconsideringrelationshipamongtoolsobjectsactionsandeffectsusingmultimodaldeepneuralnetworks