
Multisensory and modality specific processing of visual speech in different regions of the premotor cortex


Bibliographic Details
Main Authors: Callan, Daniel E., Jones, Jeffery A., Callan, Akiko
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2014
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4017150/
https://www.ncbi.nlm.nih.gov/pubmed/24860526
http://dx.doi.org/10.3389/fpsyg.2014.00389
collection PubMed
description Behavioral and neuroimaging studies have demonstrated that brain regions involved with speech production also support speech perception, especially under degraded conditions. The premotor cortex (PMC) has been shown to be active during both observation and execution of action (“Mirror System” properties), and may facilitate speech perception by mapping unimodal and multimodal sensory features onto articulatory speech gestures. For this functional magnetic resonance imaging (fMRI) study, participants identified vowels produced by a speaker in audio-visual (saw the speaker's articulating face and heard her voice), visual-only (only saw the speaker's articulating face), and audio-only (only heard the speaker's voice) conditions with varying audio signal-to-noise ratios, in order to determine the regions of the PMC involved with multisensory and modality-specific processing of visual speech gestures. The task was designed so that identification could be made with a high level of accuracy from the visual-only stimuli, to control for task difficulty and differences in intelligibility. The fMRI analysis for the visual-only and audio-visual conditions showed overlapping activity in the inferior frontal gyrus and PMC. The left ventral inferior premotor cortex (PMvi) showed properties of multimodal (audio-visual) enhancement with a degraded auditory signal, as did the left inferior parietal lobule and right cerebellum. The left ventral superior and dorsal premotor cortex (PMvs/PMd) did not show this multisensory enhancement effect; instead, these areas showed greater activity for the visual-only than for the audio-visual conditions. The results suggest that the inferior regions of the ventral premotor cortex are involved with integrating multisensory information, whereas more superior and dorsal regions of the PMC are involved with mapping unimodal (in this case visual) sensory features of the speech signal onto articulatory speech gestures.
id pubmed-4017150
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-4017150 2014-05-23 Multisensory and modality specific processing of visual speech in different regions of the premotor cortex. Callan, Daniel E.; Jones, Jeffery A.; Callan, Akiko. Front Psychol (Psychology). Frontiers Media S.A. 2014-05-05. /pmc/articles/PMC4017150/ /pubmed/24860526 http://dx.doi.org/10.3389/fpsyg.2014.00389 Text en. Copyright © 2014 Callan, Jones and Callan. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Multisensory and modality specific processing of visual speech in different regions of the premotor cortex
topic Psychology