
Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses

In an everyday social interaction we automatically integrate another’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input—a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion.

Bibliographic Details
Main Authors: Brefczynski-Lewis, Julie, Lowitszch, Svenja, Parsons, Michael, Lemieux, Susan, Puce, Aina
Format: Text
Language: English
Published: Springer US 2009
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2707948/
https://www.ncbi.nlm.nih.gov/pubmed/19384602
http://dx.doi.org/10.1007/s10548-009-0093-6
_version_ 1782169190132613120
author Brefczynski-Lewis, Julie
Lowitszch, Svenja
Parsons, Michael
Lemieux, Susan
Puce, Aina
author_sort Brefczynski-Lewis, Julie
collection PubMed
description In an everyday social interaction we automatically integrate another’s facial movements and vocalizations, be they linguistic or otherwise. This requires audiovisual integration of a continual barrage of sensory input—a phenomenon previously well-studied with human audiovisual speech, but not with non-verbal vocalizations. Using both fMRI and ERPs, we assessed neural activity to viewing and listening to an animated female face producing non-verbal, human vocalizations (i.e. coughing, sneezing) under audio-only (AUD), visual-only (VIS) and audiovisual (AV) stimulus conditions, alternating with Rest (R). Underadditive effects occurred in regions dominant for sensory processing, which showed AV activation greater than the dominant modality alone. Right posterior temporal and parietal regions showed an AV maximum in which AV activation was greater than either modality alone, but not greater than the sum of the unisensory conditions. Other frontal and parietal regions showed Common-activation in which AV activation was the same as one or both unisensory conditions. ERP data showed an early superadditive effect (AV > AUD + VIS, no rest), mid-range underadditive effects for auditory N140 and face-sensitive N170, and late AV maximum and common-activation effects. Based on convergence between fMRI and ERP data, we propose a mechanism where a multisensory stimulus may be signaled or facilitated as early as 60 ms and facilitated in sensory-specific regions by increasing processing speed (at N170) and efficiency (decreasing amplitude in auditory and face-sensitive cortical activation and ERPs). Finally, higher-order processes are also altered, but in a more complex fashion. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s10548-009-0093-6) contains supplementary material, which is available to authorized users.
format Text
id pubmed-2707948
institution National Center for Biotechnology Information
language English
publishDate 2009
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-2707948 2009-07-10 Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses Brain Topogr Original Paper Springer US 2009-04-23 2009-05 /pmc/articles/PMC2707948/ /pubmed/19384602 http://dx.doi.org/10.1007/s10548-009-0093-6 Text en © The Author(s) 2009
title Audiovisual Non-Verbal Dynamic Faces Elicit Converging fMRI and ERP Responses
topic Original Paper