Dissociating task difficulty from incongruence in face-voice emotion integration
In the everyday environment, affective information is conveyed by both the face and the voice. Studies have demonstrated that a concurrently presented voice can alter the way that an emotional face expression is perceived, and vice versa, leading to emotional conflict if the information in the two m...
Main Authors: | Watson, Rebecca; Latinus, Marianne; Noguchi, Takao; Garrod, Oliver; Crabbe, Frances; Belin, Pascal |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2013 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3826561/ https://www.ncbi.nlm.nih.gov/pubmed/24294196 http://dx.doi.org/10.3389/fnhum.2013.00744 |
_version_ | 1782290925817430016 |
author | Watson, Rebecca Latinus, Marianne Noguchi, Takao Garrod, Oliver Crabbe, Frances Belin, Pascal |
author_facet | Watson, Rebecca Latinus, Marianne Noguchi, Takao Garrod, Oliver Crabbe, Frances Belin, Pascal |
author_sort | Watson, Rebecca |
collection | PubMed |
description | In the everyday environment, affective information is conveyed by both the face and the voice. Studies have demonstrated that a concurrently presented voice can alter the way that an emotional face expression is perceived, and vice versa, leading to emotional conflict if the information in the two modalities is mismatched. Additionally, evidence suggests that incongruence of emotional valence activates cerebral networks involved in conflict monitoring and resolution. However, it is currently unclear whether this is due to task difficulty—that incongruent stimuli are harder to categorize—or simply to the detection of mismatching information in the two modalities. The aim of the present fMRI study was to examine the neurophysiological correlates of processing incongruent emotional information, independent of task difficulty. Subjects were scanned while judging the emotion of face-voice affective stimuli. Both the face and voice were parametrically morphed between anger and happiness and then paired in all audiovisual combinations, resulting in stimuli each defined by two separate values: the degree of incongruence between the face and voice, and the degree of clarity of the combined face-voice information. Due to the specific morphing procedure utilized, we hypothesized that the clarity value, rather than incongruence value, would better reflect task difficulty. Behavioral data revealed that participants integrated face and voice affective information, and that the clarity, as opposed to incongruence value correlated with categorization difficulty. Cerebrally, incongruence was more associated with activity in the superior temporal region, which emerged after task difficulty had been accounted for. Overall, our results suggest that activation in the superior temporal region in response to incongruent information cannot be explained simply by task difficulty, and may rather be due to detection of mismatching information between the two modalities. |
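The description above characterizes each audiovisual stimulus by two derived values: the degree of face-voice incongruence and the clarity of the combined emotional signal, both obtained from parametric anger-happiness morphs. The record gives no explicit formulas, so the Python sketch below is purely illustrative: it assumes a 0-100 anger-to-happiness morph scale, takes incongruence to be the absolute difference between the face and voice morph levels, and takes clarity to be the distance of the averaged pair from the ambiguous 50% midpoint. The morph levels and both definitions are assumptions for illustration, not details drawn from the study.

```python
# Hypothetical sketch of the stimulus parameterization described in the abstract.
# ASSUMPTIONS: a 0-100 anger-to-happiness morph continuum, incongruence as the
# absolute face-voice difference, and clarity as distance of the averaged pair
# from the ambiguous midpoint (50). None of these are specified in the record.

from itertools import product

MORPH_LEVELS = [10, 30, 50, 70, 90]  # assumed morph steps (anger = 0, happiness = 100)


def incongruence(face: float, voice: float) -> float:
    """Mismatch between modalities: 0 when face and voice express the same emotion."""
    return abs(face - voice)


def clarity(face: float, voice: float) -> float:
    """Distance of the combined (averaged) face-voice signal from the midpoint (50)."""
    return abs((face + voice) / 2 - 50)


# Pair every face morph with every voice morph ("all audiovisual combinations")
# and tag each pair with its two derived values.
stimuli = [
    {"face": f, "voice": v, "incongruence": incongruence(f, v), "clarity": clarity(f, v)}
    for f, v in product(MORPH_LEVELS, MORPH_LEVELS)
]

# Example: a face and voice both near the anger end are fully congruent and,
# under this assumed definition, relatively clear (far from the midpoint).
print(stimuli[0])  # {'face': 10, 'voice': 10, 'incongruence': 0, 'clarity': 40.0}
```

Under these assumed definitions the two values can vary independently, which mirrors the study's aim of dissociating incongruence from task difficulty: a highly incongruent pair (e.g., face 90, voice 10) averages to the ambiguous midpoint and is therefore also low in clarity, whereas a mildly incongruent pair can remain high in clarity.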
format | Online Article Text |
id | pubmed-3826561 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-38265612013-11-29 Dissociating task difficulty from incongruence in face-voice emotion integration Watson, Rebecca Latinus, Marianne Noguchi, Takao Garrod, Oliver Crabbe, Frances Belin, Pascal Front Hum Neurosci Neuroscience In the everyday environment, affective information is conveyed by both the face and the voice. Studies have demonstrated that a concurrently presented voice can alter the way that an emotional face expression is perceived, and vice versa, leading to emotional conflict if the information in the two modalities is mismatched. Additionally, evidence suggests that incongruence of emotional valence activates cerebral networks involved in conflict monitoring and resolution. However, it is currently unclear whether this is due to task difficulty—that incongruent stimuli are harder to categorize—or simply to the detection of mismatching information in the two modalities. The aim of the present fMRI study was to examine the neurophysiological correlates of processing incongruent emotional information, independent of task difficulty. Subjects were scanned while judging the emotion of face-voice affective stimuli. Both the face and voice were parametrically morphed between anger and happiness and then paired in all audiovisual combinations, resulting in stimuli each defined by two separate values: the degree of incongruence between the face and voice, and the degree of clarity of the combined face-voice information. Due to the specific morphing procedure utilized, we hypothesized that the clarity value, rather than incongruence value, would better reflect task difficulty. Behavioral data revealed that participants integrated face and voice affective information, and that the clarity, as opposed to incongruence value correlated with categorization difficulty. Cerebrally, incongruence was more associated with activity in the superior temporal region, which emerged after task difficulty had been accounted for. Overall, our results suggest that activation in the superior temporal region in response to incongruent information cannot be explained simply by task difficulty, and may rather be due to detection of mismatching information between the two modalities. Frontiers Media S.A. 2013-11-13 /pmc/articles/PMC3826561/ /pubmed/24294196 http://dx.doi.org/10.3389/fnhum.2013.00744 Text en Copyright © 2013 Watson, Latinus, Noguchi, Garrod, Crabbe and Belin. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Watson, Rebecca Latinus, Marianne Noguchi, Takao Garrod, Oliver Crabbe, Frances Belin, Pascal Dissociating task difficulty from incongruence in face-voice emotion integration |
title | Dissociating task difficulty from incongruence in face-voice emotion integration |
title_full | Dissociating task difficulty from incongruence in face-voice emotion integration |
title_fullStr | Dissociating task difficulty from incongruence in face-voice emotion integration |
title_full_unstemmed | Dissociating task difficulty from incongruence in face-voice emotion integration |
title_short | Dissociating task difficulty from incongruence in face-voice emotion integration |
title_sort | dissociating task difficulty from incongruence in face-voice emotion integration |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3826561/ https://www.ncbi.nlm.nih.gov/pubmed/24294196 http://dx.doi.org/10.3389/fnhum.2013.00744 |
work_keys_str_mv | AT watsonrebecca dissociatingtaskdifficultyfromincongruenceinfacevoiceemotionintegration AT latinusmarianne dissociatingtaskdifficultyfromincongruenceinfacevoiceemotionintegration AT noguchitakao dissociatingtaskdifficultyfromincongruenceinfacevoiceemotionintegration AT garrodoliver dissociatingtaskdifficultyfromincongruenceinfacevoiceemotionintegration AT crabbefrances dissociatingtaskdifficultyfromincongruenceinfacevoiceemotionintegration AT belinpascal dissociatingtaskdifficultyfromincongruenceinfacevoiceemotionintegration |