
What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions

In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states.

Full description

Bibliographic Details
Main Authors: Bartlett, Madeleine E., Edmunds, Charlotte E. R., Belpaeme, Tony, Thill, Serge, Lemaignan, Séverin
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805824/
https://www.ncbi.nlm.nih.gov/pubmed/33501065
http://dx.doi.org/10.3389/frobt.2019.00049
_version_ 1783636388836540416
author Bartlett, Madeleine E.
Edmunds, Charlotte E. R.
Belpaeme, Tony
Thill, Serge
Lemaignan, Séverin
author_facet Bartlett, Madeleine E.
Edmunds, Charlotte E. R.
Belpaeme, Tony
Thill, Serge
Lemaignan, Séverin
author_sort Bartlett, Madeleine E.
collection PubMed
description In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input.
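Illustrative note: the description above outlines an analysis pipeline in which inter-rater agreement is computed between participants' ratings and machine learning classifiers are then trained on those ratings to predict internal states. The Python sketch below is a hypothetical, minimal illustration of that kind of pipeline on synthetic data; the agreement measure (mean pairwise Pearson correlation), the classifier (a random forest), and all array sizes are assumptions for demonstration, not details taken from the article.

# Hypothetical sketch (not the authors' code): given per-clip ratings from
# several raters, (1) estimate inter-rater agreement as the mean pairwise
# Pearson correlation, and (2) train a classifier to predict a nominal
# internal-state label from averaged rating profiles, broadly mirroring the
# two analysis steps described in the abstract.
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_clips, n_raters, n_items = 40, 10, 12   # illustrative sizes only
# ratings[r, c, i]: rater r's score for clip c on questionnaire item i
ratings = rng.integers(1, 8, size=(n_raters, n_clips, n_items)).astype(float)
labels = rng.integers(0, 3, size=n_clips)  # e.g., three coarse state classes

# (1) Inter-rater agreement: average correlation between every pair of
# raters, computed over their flattened clip-by-item rating profiles.
pair_r = [pearsonr(ratings[a].ravel(), ratings[b].ravel())[0]
          for a, b in combinations(range(n_raters), 2)]
print(f"mean pairwise agreement r = {np.mean(pair_r):.2f}")

# (2) Classification: predict each clip's label from its mean rating profile.
X = ratings.mean(axis=0)                  # shape: (n_clips, n_items)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=3)
print(f"cross-validated accuracy = {scores.mean():.2f}")

Averaging the raters' scores per clip before classification mirrors the idea of predicting internal states from aggregated human ratings; the article itself should be consulted for the actual agreement statistics, features, and models used.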
format Online
Article
Text
id pubmed-7805824
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-78058242021-01-25 What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions Bartlett, Madeleine E. Edmunds, Charlotte E. R. Belpaeme, Tony Thill, Serge Lemaignan, Séverin Front Robot AI Robotics and AI In recent years, the field of Human-Robot Interaction (HRI) has seen an increasing demand for technologies that can recognize and adapt to human behaviors and internal states (e.g., emotions and intentions). Psychological research suggests that human movements are important for inferring internal states. There is, however, a need to better understand what kind of information can be extracted from movement data, particularly in unconstrained, natural interactions. The present study examines which internal states and social constructs humans identify from movement in naturalistic social interactions. Participants either viewed clips of the full scene or processed versions of it displaying 2D positional data. Then, they were asked to fill out questionnaires assessing their social perception of the viewed material. We analyzed whether the full scene clips were more informative than the 2D positional data clips. First, we calculated the inter-rater agreement between participants in both conditions. Then, we employed machine learning classifiers to predict the internal states of the individuals in the videos based on the ratings obtained. Although we found a higher inter-rater agreement for full scenes compared to positional data, the level of agreement in the latter case was still above chance, thus demonstrating that the internal states and social constructs under study were identifiable in both conditions. A factor analysis run on participants' responses showed that participants identified the constructs interaction imbalance, interaction valence and engagement regardless of video condition. The machine learning classifiers achieved a similar performance in both conditions, again supporting the idea that movement alone carries relevant information. Overall, our results suggest it is reasonable to expect a machine learning algorithm, and consequently a robot, to successfully decode and classify a range of internal states and social constructs using low-dimensional data (such as the movements and poses of observed individuals) as input. Frontiers Media S.A. 2019-06-26 /pmc/articles/PMC7805824/ /pubmed/33501065 http://dx.doi.org/10.3389/frobt.2019.00049 Text en Copyright © 2019 Bartlett, Edmunds, Belpaeme, Thill and Lemaignan. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Robotics and AI
Bartlett, Madeleine E.
Edmunds, Charlotte E. R.
Belpaeme, Tony
Thill, Serge
Lemaignan, Séverin
What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
title What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
title_full What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
title_fullStr What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
title_full_unstemmed What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
title_short What Can You See? Identifying Cues on Internal States From the Movements of Natural Social Interactions
title_sort what can you see? identifying cues on internal states from the movements of natural social interactions
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7805824/
https://www.ncbi.nlm.nih.gov/pubmed/33501065
http://dx.doi.org/10.3389/frobt.2019.00049
work_keys_str_mv AT bartlettmadeleinee whatcanyouseeidentifyingcuesoninternalstatesfromthemovementsofnaturalsocialinteractions
AT edmundscharlotteer whatcanyouseeidentifyingcuesoninternalstatesfromthemovementsofnaturalsocialinteractions
AT belpaemetony whatcanyouseeidentifyingcuesoninternalstatesfromthemovementsofnaturalsocialinteractions
AT thillserge whatcanyouseeidentifyingcuesoninternalstatesfromthemovementsofnaturalsocialinteractions
AT lemaignanseverin whatcanyouseeidentifyingcuesoninternalstatesfromthemovementsofnaturalsocialinteractions