
Crossmodal benefits to vocal emotion perception in cochlear implant users


Bibliographic Details
Main Authors: von Eiff, Celina Isabelle, Frühholz, Sascha, Korth, Daniela, Guntinas-Lichius, Orlando, Schweinberger, Stefan Robert
Format: Online Article Text
Language: English
Published: Elsevier 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9791346/
https://www.ncbi.nlm.nih.gov/pubmed/36578321
http://dx.doi.org/10.1016/j.isci.2022.105711
_version_ 1784859385732792320
author von Eiff, Celina Isabelle
Frühholz, Sascha
Korth, Daniela
Guntinas-Lichius, Orlando
Schweinberger, Stefan Robert
author_facet von Eiff, Celina Isabelle
Frühholz, Sascha
Korth, Daniela
Guntinas-Lichius, Orlando
Schweinberger, Stefan Robert
author_sort von Eiff, Celina Isabelle
collection PubMed
description Speech comprehension counts as a benchmark outcome of cochlear implants (CIs)—disregarding the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs from emotional caricatures to anti-caricatures. CI users performed lower than NH individuals, and VER was correlated with life quality. Importantly, they showed larger benefits to VER with congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users’ VER. Findings advocate AV stimuli during CI rehabilitation and suggest perspectives of caricaturing for both perceptual trainings and sound processor technology.
format Online
Article
Text
id pubmed-9791346
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-97913462022-12-27 Crossmodal benefits to vocal emotion perception in cochlear implant users von Eiff, Celina Isabelle Frühholz, Sascha Korth, Daniela Guntinas-Lichius, Orlando Schweinberger, Stefan Robert iScience Article Speech comprehension counts as a benchmark outcome of cochlear implants (CIs)—disregarding the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs from emotional caricatures to anti-caricatures. CI users performed lower than NH individuals, and VER was correlated with life quality. Importantly, they showed larger benefits to VER with congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users’ VER. Findings advocate AV stimuli during CI rehabilitation and suggest perspectives of caricaturing for both perceptual trainings and sound processor technology. Elsevier 2022-12-02 /pmc/articles/PMC9791346/ /pubmed/36578321 http://dx.doi.org/10.1016/j.isci.2022.105711 Text en © 2022 The Author(s) https://creativecommons.org/licenses/by/4.0/This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
von Eiff, Celina Isabelle
Frühholz, Sascha
Korth, Daniela
Guntinas-Lichius, Orlando
Schweinberger, Stefan Robert
Crossmodal benefits to vocal emotion perception in cochlear implant users
title Crossmodal benefits to vocal emotion perception in cochlear implant users
title_full Crossmodal benefits to vocal emotion perception in cochlear implant users
title_fullStr Crossmodal benefits to vocal emotion perception in cochlear implant users
title_full_unstemmed Crossmodal benefits to vocal emotion perception in cochlear implant users
title_short Crossmodal benefits to vocal emotion perception in cochlear implant users
title_sort crossmodal benefits to vocal emotion perception in cochlear implant users
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9791346/
https://www.ncbi.nlm.nih.gov/pubmed/36578321
http://dx.doi.org/10.1016/j.isci.2022.105711
work_keys_str_mv AT voneiffcelinaisabelle crossmodalbenefitstovocalemotionperceptionincochlearimplantusers
AT fruhholzsascha crossmodalbenefitstovocalemotionperceptionincochlearimplantusers
AT korthdaniela crossmodalbenefitstovocalemotionperceptionincochlearimplantusers
AT guntinaslichiusorlando crossmodalbenefitstovocalemotionperceptionincochlearimplantusers
AT schweinbergerstefanrobert crossmodalbenefitstovocalemotionperceptionincochlearimplantusers