A Biological Inspired Cognitive Framework for Memory-Based Multi-Sensory Joint Attention in Human-Robot Interactive Tasks
Main Authors: | Eldardeer, Omar; Gonzalez-Billandon, Jonas; Grasse, Lukas; Tata, Matthew; Rea, Francesco |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2021 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8650613/ https://www.ncbi.nlm.nih.gov/pubmed/34887738 http://dx.doi.org/10.3389/fnbot.2021.648595 |
_version_ | 1784611235950493696 |
---|---|
author | Eldardeer, Omar Gonzalez-Billandon, Jonas Grasse, Lukas Tata, Matthew Rea, Francesco |
author_facet | Eldardeer, Omar Gonzalez-Billandon, Jonas Grasse, Lukas Tata, Matthew Rea, Francesco |
author_sort | Eldardeer, Omar |
collection | PubMed |
description | One of the fundamental prerequisites for effective collaboration between interactive partners is the mutual sharing of attentional focus on the same perceptual events, referred to as joint attention. Its defining elements have been widely pinpointed in the psychological, cognitive, and social sciences. The field of human-robot interaction has also exploited joint attention extensively, identifying it as a fundamental prerequisite for proficient human-robot collaboration. However, joint attention between robots and human partners is often encoded in prefixed robot behaviours that do not fully address the dynamics of interactive scenarios. We provide autonomous attentional behaviour for robotics based on multi-sensory perception that robustly relocates the focus of attention onto the same targets the human partner attends to. Further, we investigated how such joint attention between a human and a robot partner improves with a new biologically inspired, memory-based attention component. We assessed the model with the humanoid robot iCub performing a joint task with a human partner in a real-world unstructured scenario. The model showed robust performance in capturing the stimulation, making a localisation decision within the right time frame, and then executing the right action. We then compared the attentional performance of the robot against human performance when stimulated from the same source across different modalities (audio-visual and audio only). The comparison showed that the model behaves with temporal dynamics compatible with those of humans, providing an effective solution for memory-based joint attention in real-world unstructured environments. Further, we analysed the localisation performance (reaction time and accuracy); the results showed that the robot performed better in the audio-visual condition than in the audio-only condition. The robot's performance in the audio-visual condition was broadly comparable with the behaviour of the human participants, whereas it was less efficient in audio-only localisation. After a detailed analysis of the internal components of the architecture, we conclude that the differences in performance are due to ego-noise, which significantly degrades audio-only localisation. (A minimal illustrative sketch of this kind of multi-sensory evidence accumulation follows at the end of this record.) |
format | Online Article Text |
id | pubmed-8650613 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8650613 2021-12-08 A Biological Inspired Cognitive Framework for Memory-Based Multi-Sensory Joint Attention in Human-Robot Interactive Tasks Eldardeer, Omar Gonzalez-Billandon, Jonas Grasse, Lukas Tata, Matthew Rea, Francesco Front Neurorobot Neuroscience Frontiers Media S.A. 2021-11-23 /pmc/articles/PMC8650613/ /pubmed/34887738 http://dx.doi.org/10.3389/fnbot.2021.648595 Text en Copyright © 2021 Eldardeer, Gonzalez-Billandon, Grasse, Tata and Rea. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Eldardeer, Omar Gonzalez-Billandon, Jonas Grasse, Lukas Tata, Matthew Rea, Francesco A Biological Inspired Cognitive Framework for Memory-Based Multi-Sensory Joint Attention in Human-Robot Interactive Tasks |
title | A Biological Inspired Cognitive Framework for Memory-Based Multi-Sensory Joint Attention in Human-Robot Interactive Tasks |
title_sort | biological inspired cognitive framework for memory-based multi-sensory joint attention in human-robot interactive tasks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8650613/ https://www.ncbi.nlm.nih.gov/pubmed/34887738 http://dx.doi.org/10.3389/fnbot.2021.648595 |
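To make the idea of memory-based multi-sensory localisation described in the abstract concrete, here is a minimal, hypothetical Python sketch. It is not the authors' implementation: the azimuth resolution, fusion weights, decay factor, decision threshold, and all function names are assumptions introduced only for illustration. It shows per-azimuth audio and visual saliency being fused, accumulated in a leaky memory trace, and turned into a gaze decision once the accumulated evidence passes a threshold.

```python
# Illustrative sketch only (not the paper's code): multi-sensory saliency fusion
# with a leaky "memory" trace and a threshold-triggered localisation decision.
# All constants and names below are assumed values for demonstration.
import numpy as np

N_AZIMUTH = 181                 # hypothetical 1-degree bins from -90 to +90 degrees
DECAY = 0.9                     # assumed leaky-memory retention per time step
W_AUDIO, W_VISION = 0.5, 0.5    # assumed fusion weights for the two modalities
THRESHOLD = 3.0                 # assumed evidence level needed to trigger an action

def fuse(audio_map: np.ndarray, visual_map: np.ndarray) -> np.ndarray:
    """Combine per-azimuth audio and visual saliency into one map."""
    return W_AUDIO * audio_map + W_VISION * visual_map

def step(memory: np.ndarray, audio_map: np.ndarray, visual_map: np.ndarray):
    """One update of the memory-based attention map; returns (memory, decision)."""
    memory = DECAY * memory + fuse(audio_map, visual_map)   # leaky accumulation
    best = int(np.argmax(memory))
    decision = best - 90 if memory[best] > THRESHOLD else None  # azimuth in degrees
    return memory, decision

# Toy usage: a stimulus repeatedly seen and heard near +30 degrees.
memory = np.zeros(N_AZIMUTH)
for _ in range(5):
    audio = np.zeros(N_AZIMUTH)
    audio[120] = 1.0        # hypothetical audio localisation peak near +30 degrees
    vision = np.zeros(N_AZIMUTH)
    vision[120] = 1.0       # hypothetical visual saliency at the same azimuth
    memory, decision = step(memory, audio, vision)
    if decision is not None:
        print(f"Orient gaze towards {decision} degrees")  # e.g., command the robot head
        break
```

The leaky accumulation is the role the abstract attributes to the memory-based attention component: recent, converging audio-visual evidence dominates the decision while stale stimulation fades, and weaker or noisier single-modality input (as in the audio-only condition) takes longer to reach the decision threshold.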