
Interpreting text messages with graphic facial expression by deaf and hearing people

In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says “yes” with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., “no” tended to mean “no” irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people.

Bibliographic Details
Main Authors: Saegusa, Chihiro; Namatame, Miki; Watanabe, Katsumi
Format: Online Article (Text)
Language: English
Journal: Front Psychol
Published: Frontiers Media S.A., 2015-04-02
Subjects: Psychology
License: Copyright © 2015 Saegusa, Namatame and Watanabe. Open-access article distributed under the Creative Commons Attribution License (CC BY): http://creativecommons.org/licenses/by/4.0/
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4382978/
https://www.ncbi.nlm.nih.gov/pubmed/25883582
http://dx.doi.org/10.3389/fpsyg.2015.00383