
On the Time Course of Vocal Emotion Recognition

Bibliographic Details
Main Authors: Pell, Marc D.; Kotz, Sonja A.
Format: Online Article (Text)
Language: English
Published: Public Library of Science, 2011-11-07
Journal: PLoS One
Collection: PubMed (PMC3210149)
Subjects: Research Article
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3210149/
https://www.ncbi.nlm.nih.gov/pubmed/22087275
http://dx.doi.org/10.1371/journal.pone.0027256
Description: How quickly do listeners recognize emotions from a speaker's voice, and does the time course for recognition vary by emotion type? To address these questions, we adapted the auditory gating paradigm to estimate how much vocal information is needed for listeners to categorize five basic emotions (anger, disgust, fear, sadness, happiness) and neutral utterances produced by male and female speakers of English. Semantically anomalous pseudo-utterances (e.g., The rivix jolled the silling) conveying each emotion were divided into seven gate intervals according to the number of syllables that listeners heard from sentence onset. Participants (n = 48) judged the emotional meaning of stimuli presented at each gate duration interval in a successive, blocked presentation format. Analyses examined how recognition of each emotion evolves as an utterance unfolds and estimated the “identification point” for each emotion. Results showed that anger, sadness, fear, and neutral expressions are recognized more accurately at short gate intervals than happiness, and particularly disgust; however, as speech unfolds, recognition of happiness improves significantly towards the end of the utterance (and fear is recognized more accurately than the other emotions). When the gate associated with the emotion identification point of each stimulus was calculated, the data indicated that fear (M = 517 ms), sadness (M = 576 ms), and neutral (M = 510 ms) expressions were identified from shorter acoustic events than the other emotions. These data reveal differences in the underlying time course for conscious recognition of basic emotions from vocal expressions, which should be accounted for in studies of emotional speech processing.
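
The “identification point” analysis mentioned in the description follows the usual gating-paradigm convention: the identification point is the earliest gate at which a listener selects the target emotion and does not switch to another label at any later gate, and its duration is the amount of audio heard up to that gate. A minimal sketch of that computation in Python follows; this is not the authors' analysis code, and the data layout, gate durations, and function name are hypothetical.

# Gating-paradigm sketch: find the earliest gate at which the listener
# settled on the target emotion and never changed their answer afterward.
# `responses` holds the emotion label chosen at each successive gate;
# `gate_durations_ms` holds the cumulative audio duration at each gate.
def identification_point(responses, target, gate_durations_ms):
    for i, label in enumerate(responses):
        # Candidate gate: correct here and at every remaining gate.
        if label == target and all(r == target for r in responses[i:]):
            return gate_durations_ms[i]  # duration heard at the settling gate
    return None  # stimulus never stably identified

# Illustrative example: a "fear" stimulus gated into seven syllable chunks.
responses = ["neutral", "sadness", "fear", "fear", "fear", "fear", "fear"]
gates_ms = [180, 340, 517, 690, 850, 1010, 1200]  # made-up durations
print(identification_point(responses, "fear", gates_ms))  # -> 517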

Rights: © Pell, Kotz. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.