Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method.
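The abstract's core idea is that words become linked to the sensory-channel (action, position, object, or color) whose categories they reliably co-occur with across situations. The Python snippet below is only a minimal illustrative sketch of that co-occurrence idea, not the paper's Bayesian generative model; the function name, the smoothing constant, and the toy trials are all invented for illustration.

```python
# Illustrative sketch of cross-situational word-to-channel association.
# NOT the authors' Bayesian generative model: just a smoothed co-occurrence
# counter that assigns each word to the channel where its category
# distribution is most peaked.
from collections import defaultdict
import math

CHANNELS = ["action", "position", "object", "color"]

def learn_word_channel_associations(trials, alpha=0.1):
    # trials: list of (words, observation) pairs, where `observation` maps each
    # channel name to the category label perceived on that channel in that trial.
    counts = {ch: defaultdict(lambda: defaultdict(float)) for ch in CHANNELS}
    for words, obs in trials:
        for w in words:
            for ch in CHANNELS:
                counts[ch][w][obs[ch]] += 1.0

    associations = {}
    all_words = {w for words, _ in trials for w in words}
    for w in all_words:
        best_ch, best_score = None, -math.inf
        for ch in CHANNELS:
            cats = counts[ch][w]
            total = sum(cats.values()) + alpha * len(cats)
            # Negative entropy of the smoothed word-vs-category distribution:
            # a word that keeps co-occurring with one category scores highest.
            score = sum((c + alpha) / total * math.log((c + alpha) / total)
                        for c in cats.values())
            if score > best_score:
                best_ch, best_score = ch, score
        associations[w] = best_ch
    return associations

# Hypothetical toy trials: each tutor sentence describes the attended object,
# its color, its position, and the accompanying action.
trials = [
    (["grasp", "red", "ball"],
     {"action": "grasp", "position": "front", "object": "ball", "color": "red"}),
    (["push", "blue", "box"],
     {"action": "push", "position": "left", "object": "box", "color": "blue"}),
    (["grasp", "blue", "box"],
     {"action": "grasp", "position": "right", "object": "box", "color": "blue"}),
    (["push", "red", "ball"],
     {"action": "push", "position": "front", "object": "ball", "color": "red"}),
]
print(learn_word_channel_associations(trials))
```

With so few toy trials some words remain referentially ambiguous and the arg-max simply falls back to the first tied channel; resolving that ambiguity through accumulated, varied exposures is what the larger 20-40 trial cross-situational scenarios described in the abstract address.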

Bibliographic Details
Main Authors: Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2017
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5742219/
https://www.ncbi.nlm.nih.gov/pubmed/29311888
http://dx.doi.org/10.3389/fnbot.2017.00066
Record Details
ID: pubmed-5742219
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
Journal: Frontiers in Neurorobotics (Front Neurorobot)
Topic: Neuroscience
Published Online: 2017-12-19
License: Copyright © 2017 Taniguchi, Taniguchi and Cangelosi. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). Use, distribution, or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.