Can the robot “see” what I see? Robot gaze drives attention depending on mental state attribution

Mentalizing, where humans infer the mental states of others, facilitates understanding and interaction in social situations. Humans also tend to adopt mentalizing strategies when interacting with robotic agents. There is an ongoing debate about how inferred mental states affect gaze following, a key component of joint attention. Although gaze from a robot induces gaze following, the impact of mental state attribution on robotic gaze following remains unclear. To address this question, we asked forty-nine young adults to perform a gaze cueing task during which mental state attribution was manipulated as follows. Participants sat facing a robot that turned its head to the screen at its left or right. Their task was to respond to targets that appeared either at the screen the robot gazed at or at the other screen. In the baseline condition, the robot was positioned so that participants would perceive it as being able to see the screens. We expected faster response times to targets at the screen the robot gazed at than to targets at the non-gazed screen (i.e., a gaze cueing effect). In the experimental condition, the robot's line of sight was occluded by a physical barrier, such that participants would perceive it as unable to see the screens. Our results revealed gaze cueing effects in both conditions, although the effect was reduced in the occluded condition compared to the baseline. These results add to the expanding fields of social cognition and human-robot interaction by suggesting that mentalizing has an impact on robotic gaze following.

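The gaze cueing effect mentioned in the abstract is typically quantified as the difference in mean response time between incongruent trials (target at the screen the robot did not gaze at) and congruent trials (target at the gazed-at screen), computed separately per condition. The minimal Python sketch below illustrates that arithmetic only; the trial values and the gaze_cueing_effect helper are invented for illustration and are not the article's data or analysis code.

from statistics import mean

# Hypothetical trials: (condition, target congruent with robot gaze?, RT in ms).
# These numbers are made up for illustration; they are not the study's data.
trials = [
    ("baseline", True, 312), ("baseline", False, 347),
    ("baseline", True, 298), ("baseline", False, 335),
    ("occluded", True, 320), ("occluded", False, 331),
    ("occluded", True, 305), ("occluded", False, 318),
]

def gaze_cueing_effect(trials, condition):
    # Mean RT on incongruent trials minus mean RT on congruent trials (ms).
    # A positive value indicates that attention followed the robot's gaze.
    congruent = [rt for cond, cong, rt in trials if cond == condition and cong]
    incongruent = [rt for cond, cong, rt in trials if cond == condition and not cong]
    return mean(incongruent) - mean(congruent)

for condition in ("baseline", "occluded"):
    print(f"{condition}: gaze cueing effect = {gaze_cueing_effect(trials, condition):.1f} ms")

With these made-up values the effect comes out positive in both conditions but smaller when the robot's view is occluded, mirroring the qualitative pattern reported in the abstract.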
Bibliographic Details

Main Authors: Morillo-Mendez, Lucas, Stower, Rebecca, Sleat, Alex, Schreiter, Tim, Leite, Iolanda, Mozos, Oscar Martinez, Schrooten, Martien G. S.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10374202/
https://www.ncbi.nlm.nih.gov/pubmed/37519379
http://dx.doi.org/10.3389/fpsyg.2023.1215771
Journal: Frontiers in Psychology (Front Psychol)
Publication Date: 2023-07-13
Copyright © 2023 Morillo-Mendez, Stower, Sleat, Schreiter, Leite, Mozos and Schrooten. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.