
Hierarchical Spatial Concept Formation Based on Multimodal Information for Human Support Robots

Bibliographic Details
Main Authors: Hagiwara, Yoshinobu; Inoue, Masakazu; Kobayashi, Hiroyoshi; Taniguchi, Tadahiro
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5859180/
https://www.ncbi.nlm.nih.gov/pubmed/29593521
http://dx.doi.org/10.3389/fnbot.2018.00011
Collection: PubMed
Description: In this paper, we propose a hierarchical spatial concept formation method based on a Bayesian generative model with multimodal information, e.g., vision, position, and word information. Since humans can select an appropriate level of abstraction according to the situation and describe their position linguistically, e.g., “I am in my home” and “I am in front of the table,” a hierarchical structure of spatial concepts is necessary for human support robots to communicate smoothly with users. The proposed method enables a robot to form hierarchical spatial concepts by categorizing multimodal information using hierarchical multimodal latent Dirichlet allocation (hMLDA). Object recognition results from a convolutional neural network (CNN), hierarchical k-means clustering results of self-positions estimated by Monte Carlo localization (MCL), and a set of location names are used as the vision, position, and word features, respectively. Experiments on forming hierarchical spatial concepts and on evaluating how well the proposed method predicts unobserved location names and position categories are performed with a robot in the real world. The results verify that, relative to comparable baseline methods, the proposed method enables a robot to predict location names and position categories closer to those predicted by humans. As an application example in a home environment, a demonstration in which a human support robot moves to an instructed place based on human speech instructions is achieved using the formed hierarchical spatial concept.
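The abstract names hierarchical k-means clustering of self-positions (estimated by MCL) as the position feature. As a rough illustration only, and not the authors' code: the room layout, cluster counts, and data below are all invented, a two-level k-means over simulated 2D positions might look like this:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on an (n, d) array: returns (centroids, labels)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids at k distinct data points.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        dists = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Hypothetical 2D self-position samples (e.g., from MCL) around three "rooms".
rng = np.random.default_rng(1)
rooms = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
positions = np.vstack([c + rng.normal(0, 0.3, (30, 2)) for c in rooms])

# Level 1: coarse position categories ("rooms").
top_centroids, top_labels = kmeans(positions, k=3)

# Level 2: finer categories ("places") within each coarse cluster.
hierarchy = {}
for j in range(3):
    sub = positions[top_labels == j]
    sub_centroids, _ = kmeans(sub, k=2, seed=j)
    hierarchy[j] = sub_centroids
```

In the paper the hierarchy over all modalities is learned jointly by hMLDA; this sketch only shows the generic idea of coarse-to-fine clustering of position data.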
ID: pubmed-5859180
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Neurorobot (Frontiers in Neurorobotics), Neuroscience section
Published: 2018-03-13
License: Copyright © 2018 Hagiwara, Inoue, Kobayashi and Taniguchi (http://creativecommons.org/licenses/by/4.0/). This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.