A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study


Bibliographic Details
Main Authors: Straube, Benjamin, Green, Antonia, Weis, Susanne, Kircher, Tilo
Format: Online Article Text
Language: English
Published: Public Library of Science 2012
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3511386/
https://www.ncbi.nlm.nih.gov/pubmed/23226488
http://dx.doi.org/10.1371/journal.pone.0051207
_version_ 1782251596592185344
author Straube, Benjamin
Green, Antonia
Weis, Susanne
Kircher, Tilo
author_facet Straube, Benjamin
Green, Antonia
Weis, Susanne
Kircher, Tilo
author_sort Straube, Benjamin
collection PubMed
description In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or on different brain networks remains largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips of an actor who either produced speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus and including bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system takes too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network.
format Online
Article
Text
id pubmed-3511386
institution National Center for Biotechnology Information
language English
publishDate 2012
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-35113862012-12-05 A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study Straube, Benjamin Green, Antonia Weis, Susanne Kircher, Tilo PLoS One Research Article In a natural setting, speech is often accompanied by gestures. Like language, speech-accompanying iconic gestures convey semantic information to some extent. However, whether comprehension of the information contained in the auditory and visual modalities depends on the same or on different brain networks remains largely unknown. In this fMRI study, we aimed to identify the cortical areas engaged in supramodal processing of semantic information. BOLD changes were recorded in 18 healthy right-handed male subjects watching video clips of an actor who either produced speech (S, acoustic) or gestures (G, visual) in more (+) or less (−) meaningful varieties. In the experimental conditions, familiar speech or isolated iconic gestures were presented; during the visual control condition the volunteers watched meaningless gestures (G−), while during the acoustic control condition a foreign language was presented (S−). The conjunction of visual and acoustic semantic processing revealed activations extending from the left inferior frontal gyrus to the precentral gyrus and including bilateral posterior temporal regions. We conclude that proclaiming this frontotemporal network the brain's core language system takes too narrow a view. Our results rather indicate that these regions constitute a supramodal semantic processing network. Public Library of Science 2012-11-30 /pmc/articles/PMC3511386/ /pubmed/23226488 http://dx.doi.org/10.1371/journal.pone.0051207 Text en © 2012 Straube et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Straube, Benjamin
Green, Antonia
Weis, Susanne
Kircher, Tilo
A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
title A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
title_full A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
title_fullStr A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
title_full_unstemmed A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
title_short A Supramodal Neural Network for Speech and Gesture Semantics: An fMRI Study
title_sort supramodal neural network for speech and gesture semantics: an fmri study
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3511386/
https://www.ncbi.nlm.nih.gov/pubmed/23226488
http://dx.doi.org/10.1371/journal.pone.0051207
work_keys_str_mv AT straubebenjamin asupramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT greenantonia asupramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT weissusanne asupramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT kirchertilo asupramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT straubebenjamin supramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT greenantonia supramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT weissusanne supramodalneuralnetworkforspeechandgesturesemanticsanfmristudy
AT kirchertilo supramodalneuralnetworkforspeechandgesturesemanticsanfmristudy