Large Scale Functional Brain Networks Underlying Temporal Integration of Audio-Visual Speech Perception: An EEG Study

Bibliographic Details
Main Authors: Kumar, G. Vinodh; Halder, Tamesh; Jaiswal, Amit K.; Mukherjee, Abhishek; Roy, Dipanjan; Banerjee, Arpan
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2016
Subjects: Psychology
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5062921/
https://www.ncbi.nlm.nih.gov/pubmed/27790169
http://dx.doi.org/10.3389/fpsyg.2016.01558

Collection: PubMed
Abstract: Observable lip movements of the speaker influence perception of auditory speech. A classical example of this influence is reported by listeners who perceive an illusory (cross-modal) speech sound (the McGurk effect) when presented with incongruent audio-visual (AV) speech stimuli. Recent neuroimaging studies of AV speech perception accentuate the role of frontal, parietal, and integrative brain sites in the vicinity of the superior temporal sulcus (STS) in multisensory speech perception. However, whether and how a network spanning the whole brain participates in multisensory perceptual processing remains an open question. We posit that large-scale functional connectivity among neural populations situated in distributed brain sites may provide valuable insights into the processing and fusion of AV speech. Varying the psychophysical parameters in tandem with electroencephalogram (EEG) recordings, we exploited the trial-by-trial perceptual variability of incongruent AV speech stimuli to identify the characteristics of the large-scale cortical network that facilitates multisensory perception during synchronous and asynchronous AV speech. We evaluated the spectral landscape of EEG signals during multisensory speech perception at varying AV lags. Functional connectivity dynamics for all sensor pairs were computed using time-frequency global coherence, the vector sum of pairwise coherence changes over time. During synchronous AV speech, we observed enhanced global gamma-band coherence and decreased alpha- and beta-band coherence underlying cross-modal (illusory) perception compared to unisensory perception within a temporal window of 300–600 ms following stimulus onset. During asynchronous AV speech, global broadband coherence was observed during cross-modal perception at earlier times, along with pre-stimulus decreases of lower-frequency power, e.g., alpha rhythms for positive AV lags and theta rhythms for negative AV lags. Thus, our study indicates that the temporal integration underlying multisensory speech perception needs to be understood within the framework of large-scale functional brain network mechanisms, in addition to the established cortical loci of multisensory speech perception.
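
The time-frequency global coherence described in the abstract can be illustrated with a short sketch. The following Python code (a minimal sketch assuming NumPy and SciPy; the function name, window length, and per-pair aggregation are illustrative assumptions, not the authors' implementation) estimates trial-averaged magnitude-squared coherence for every EEG sensor pair at each time-frequency bin and combines the pairs into a single global coherence value.

# Minimal sketch, not the authors' code: time-frequency global coherence
# aggregated over all EEG sensor pairs.
import numpy as np
from scipy.signal import stft

def global_coherence(eeg, fs=250.0, win_sec=0.5):
    """eeg: array of shape (n_trials, n_sensors, n_samples)."""
    n_trials, n_sensors, _ = eeg.shape
    nperseg = int(win_sec * fs)
    # Short-time Fourier transform of every trial and sensor;
    # Z has shape (n_trials, n_sensors, n_freqs, n_times).
    freqs, times, Z = stft(eeg, fs=fs, nperseg=nperseg,
                           noverlap=nperseg // 2, axis=-1)
    # Trial-averaged cross-spectral matrix S[i, j, f, t] = <Z_i * conj(Z_j)>.
    S = np.einsum('rift,rjft->ijft', Z, np.conj(Z)) / n_trials
    power = np.real(np.einsum('iift->ift', S))  # auto-spectra per sensor
    gc = np.zeros((len(freqs), len(times)))
    for i in range(n_sensors):
        for j in range(i + 1, n_sensors):
            # Magnitude-squared coherence of sensor pair (i, j).
            gc += np.abs(S[i, j]) ** 2 / (power[i] * power[j] + 1e-20)
    # Average over pairs to obtain one global coherence per (frequency, time) bin.
    return freqs, times, gc / (n_sensors * (n_sensors - 1) / 2)

Comparing such a global coherence map between trials grouped by percept (e.g., cross-modal vs. unisensory reports) would yield the kind of band-limited coherence differences summarized in the abstract.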

Record ID: pubmed-5062921
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Psychol (Psychology)
Published online: 2016-10-13
Copyright © 2016 Kumar, Halder, Jaiswal, Mukherjee, Roy and Banerjee. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY; http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.