The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG
Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as gender, identity, and the emotional state of the speaker. We tested whether our brain can systematically and automatically differentiate and track a periodic stream of emotional utterances among a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, hence at a 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fear) were presented as different conditions in different streams. To control the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances. This scrambling preserves low-level acoustic characteristics but ensures that the emotional character is no longer recognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response for the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and is not merely driven by low-level perceptual features. Eventually, here, we present a new database for vocal emotion research with short emotional utterances (EVID) together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
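The arithmetic behind the design (a 4 Hz base rate with every third stimulus emotional yields a 4/3 ≈ 1.333 Hz oddball frequency) can be illustrated with a toy simulation. The sketch below is not the authors' analysis pipeline; the sampling rate, duration, and response amplitudes are illustrative assumptions. It builds an impulse train at the base rate, boosts every third stimulus, and shows that the spectrum then contains energy at the oddball frequency in addition to the base frequency.

```python
import numpy as np

# Hypothetical illustration of the frequency-tagging design: stimuli at a
# 4 Hz base rate, every third one an "oddball", giving 4 / 3 ≈ 1.333 Hz.
fs = 512           # sampling rate in Hz (assumed, not from the paper)
duration = 60.0    # seconds of simulated recording (assumed)
base_rate = 4.0
oddball_rate = base_rate / 3  # 1.333... Hz

t = np.arange(0, duration, 1 / fs)
n_stimuli = int(duration * base_rate)
onsets = (np.arange(n_stimuli) / base_rate * fs).astype(int)

# Toy response: every stimulus evokes amplitude 1.0; oddballs evoke an
# extra 0.5, mimicking a differential response to emotional utterances.
signal = np.zeros_like(t)
signal[onsets] += 1.0
signal[onsets[2::3]] += 0.5  # every third stimulus is the oddball

# Amplitude spectrum: peaks appear at the base rate (4 Hz) and at the
# oddball frequency (1.333 Hz) and its harmonics.
spectrum = np.abs(np.fft.rfft(signal)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

print(round(oddball_rate, 3))  # 1.333
```

With a 60 s window the frequency resolution is 1/60 Hz, so both 4 Hz and 1.333 Hz fall exactly on FFT bins; `amp_at(oddball_rate)` clearly exceeds the amplitude at neighbouring, untagged frequencies.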
| Main Authors: | Vos, Silke; Collignon, Olivier; Boets, Bart |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9954097/ https://www.ncbi.nlm.nih.gov/pubmed/36831705 http://dx.doi.org/10.3390/brainsci13020162 |
_version_ | 1784894042274791424 |
author | Vos, Silke Collignon, Olivier Boets, Bart |
author_sort | Vos, Silke |
collection | PubMed |
description | Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as gender, identity, and the emotional state of the speaker. We tested whether our brain can systematically and automatically differentiate and track a periodic stream of emotional utterances among a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, hence at a 1.333 Hz oddball frequency. Four emotions (happy, sad, angry, and fear) were presented as different conditions in different streams. To control the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances. This scrambling preserves low-level acoustic characteristics but ensures that the emotional character is no longer recognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response for the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and is not merely driven by low-level perceptual features. Eventually, here, we present a new database for vocal emotion research with short emotional utterances (EVID) together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination. |
format | Online Article Text |
id | pubmed-9954097 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9954097 2023-02-25. Brain Sci, Article. MDPI 2023-01-18 /pmc/articles/PMC9954097/ /pubmed/36831705 http://dx.doi.org/10.3390/brainsci13020162 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
title | The Sound of Emotion: Pinpointing Emotional Voice Processing Via Frequency Tagging EEG |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9954097/ https://www.ncbi.nlm.nih.gov/pubmed/36831705 http://dx.doi.org/10.3390/brainsci13020162 |