Crossmodal Language Grounding in an Embodied Neurocognitive Model
Main authors: | Heinrich, Stefan; Yao, Yuan; Hinz, Tobias; Liu, Zhiyuan; Hummel, Thomas; Kerzel, Matthias; Weber, Cornelius; Wermter, Stefan |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2020 |
Subjects: | Neuroscience |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7591775/ https://www.ncbi.nlm.nih.gov/pubmed/33154720 http://dx.doi.org/10.3389/fnbot.2020.00052 |
_version_ | 1783601055560368128 |
author | Heinrich, Stefan; Yao, Yuan; Hinz, Tobias; Liu, Zhiyuan; Hummel, Thomas; Kerzel, Matthias; Weber, Cornelius; Wermter, Stefan |
author_sort | Heinrich, Stefan |
collection | PubMed |
description | Human infants are able to acquire natural language seemingly easily at an early age. Their language learning seems to occur simultaneously with learning other cognitive functions as well as with playful interactions with the environment and caregivers. From a neuroscientific perspective, natural language is embodied, grounded in most, if not all, sensory and sensorimotor modalities, and acquired by means of crossmodal integration. However, characterizing the underlying mechanisms in the brain is difficult and explaining the grounding of language in crossmodal perception and action remains challenging. In this paper, we present a neurocognitive model for language grounding which reflects bio-inspired mechanisms such as an implicit adaptation of timescales as well as end-to-end multimodal abstraction. It addresses developmental robotic interaction and extends its learning capabilities using larger-scale knowledge-based data. In our scenario, we utilize the humanoid robot NICO in obtaining the EMIL data collection, in which the cognitive robot interacts with objects in a children's playground environment while receiving linguistic labels from a caregiver. The model analysis shows that crossmodally integrated representations are sufficient for acquiring language merely from sensory input through interaction with objects in an environment. The representations self-organize hierarchically and embed temporal and spatial information through composition and decomposition. This model can also provide the basis for further crossmodal integration of perceptually grounded cognitive representations. |
format | Online Article Text |
id | pubmed-7591775 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7591775 2020-11-04 Front Neurorobot Neuroscience Frontiers Media S.A. 2020-10-14 /pmc/articles/PMC7591775/ /pubmed/33154720 http://dx.doi.org/10.3389/fnbot.2020.00052 Text en Copyright © 2020 Heinrich, Yao, Hinz, Liu, Hummel, Kerzel, Weber and Wermter. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Crossmodal Language Grounding in an Embodied Neurocognitive Model |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7591775/ https://www.ncbi.nlm.nih.gov/pubmed/33154720 http://dx.doi.org/10.3389/fnbot.2020.00052 |