
Can an android’s posture and movement discriminate against the ambiguous emotion perceived from its facial expressions?



Bibliographic Details
Main Authors: Yagi, Satoshi; Nakata, Yoshihiro; Nakamura, Yutaka; Ishiguro, Hiroshi
Format: Online Article Text
Language: English
Published: Public Library of Science, 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8354482/
https://www.ncbi.nlm.nih.gov/pubmed/34375327
http://dx.doi.org/10.1371/journal.pone.0254905
Collection: PubMed
Description: Expressing emotions through various modalities is a crucial function not only for humans but also for robots. Mapping facial expressions to basic emotions is a widely used method in research on robot emotional expression. This method holds that each emotional expression has a specific facial muscle activation pattern and that people perceive the emotion by reading that pattern. However, recent research on human behavior reveals that some emotional expressions, such as “intense”, are difficult to judge as positive or negative from the facial expression alone. It has not yet been investigated whether robots can likewise express ambiguous facial expressions with no clear valence, or whether adding body expressions can make the facial valence clearer to humans. This paper shows that viewers perceive an ambiguous facial expression of an android more clearly when body postures and movements are added. We conducted three online survey experiments among North American residents, with 94, 114, and 114 participants, respectively. In Experiment 1, by calculating the entropy of participants’ judgments, we found that the facial expression “intense” was difficult to judge as positive or negative when participants were shown the face alone. In Experiments 2 and 3, using ANOVA, we confirmed that participants judged the facial valence better when shown the android’s whole body, even though the facial expression was the same as in Experiment 1. These results suggest that robots’ facial and body expressions should be designed jointly to achieve better communication with humans. For smoother cooperative human-robot interaction, such as education by robots, emotional expressions conveyed through a combination of the robot’s face and body are necessary to convey its intentions or desires.
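The abstract does not give the exact entropy formula used in Experiment 1, but a standard way to quantify how ambiguous an expression is would be the Shannon entropy of the distribution of participants' valence judgments: entropy is maximal when votes split evenly and zero when everyone agrees. A minimal sketch (the function name and sample data are illustrative, not from the paper):

```python
from collections import Counter
import math

def judgment_entropy(judgments):
    """Shannon entropy (in bits) of a list of categorical judgments.

    Higher entropy means a more ambiguous stimulus (judgments split
    evenly across categories); 0 means all participants agreed.
    """
    counts = Counter(judgments)
    n = len(judgments)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# An even positive/negative split is maximally ambiguous (1.0 bit
# for two categories); a near-unanimous vote is close to 0.
ambiguous = ["positive"] * 47 + ["negative"] * 47
clear = ["positive"] * 90 + ["negative"] * 4
print(judgment_entropy(ambiguous))  # 1.0
print(judgment_entropy(clear))
```

Under this measure, an "intense" face judged positive by roughly half of the viewers would score near the 1-bit maximum, matching the paper's finding that the face alone carries no clear valence.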
ID: pubmed-8354482
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: PLoS One (Research Article)
Published: Public Library of Science, 2021-08-10
License: © 2021 Yagi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.