Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels
Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or...
Main Authors: | Rampinini, Alessandra Cecilia, Handjaras, Giacomo, Leo, Andrea, Cecchetti, Luca, Betta, Monica, Marotta, Giovanna, Ricciardi, Emiliano, Pietrini, Pietro |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2019 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6383050/ https://www.ncbi.nlm.nih.gov/pubmed/30837851 http://dx.doi.org/10.3389/fnhum.2019.00032 |
_version_ | 1783396770574761984 |
---|---|
author | Rampinini, Alessandra Cecilia Handjaras, Giacomo Leo, Andrea Cecchetti, Luca Betta, Monica Marotta, Giovanna Ricciardi, Emiliano Pietrini, Pietro |
author_facet | Rampinini, Alessandra Cecilia Handjaras, Giacomo Leo, Andrea Cecchetti, Luca Betta, Monica Marotta, Giovanna Ricciardi, Emiliano Pietrini, Pietro |
author_sort | Rampinini, Alessandra Cecilia |
collection | PubMed |
description | Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or cortical space-sharing, thus trying to explain the intertwined spatial and functional organization of motor and acoustic components across the fronto-temporal cortical network. In this context, the focus of attention has recently shifted toward specific model fitting, aimed at motor and/or acoustic space reconstruction in brain activity within the language network. Here, we tested a model based on acoustic properties (formants), and one based on motor properties (articulation parameters), where model-free decoding of evoked fMRI activity during perception, imagery, and production of vowels had been successful. Results revealed that phonological information organizes around formant structure during the perception of vowels; interestingly, such a model was reconstructed in a broad temporal region, outside of the primary auditory cortex, but also in the pars triangularis of the left inferior frontal gyrus. Conversely, articulatory features were not associated with brain activity in these regions. Overall, our results call for a degree of interdependence based on acoustic information, between the frontal and temporal ends of the language network. |
format | Online Article Text |
id | pubmed-6383050 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-63830502019-03-05 Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels Rampinini, Alessandra Cecilia Handjaras, Giacomo Leo, Andrea Cecchetti, Luca Betta, Monica Marotta, Giovanna Ricciardi, Emiliano Pietrini, Pietro Front Hum Neurosci Neuroscience Classical studies have isolated a distributed network of temporal and frontal areas engaged in the neural representation of speech perception and production. With modern literature arguing against unique roles for these cortical regions, different theories have favored either neural code-sharing or cortical space-sharing, thus trying to explain the intertwined spatial and functional organization of motor and acoustic components across the fronto-temporal cortical network. In this context, the focus of attention has recently shifted toward specific model fitting, aimed at motor and/or acoustic space reconstruction in brain activity within the language network. Here, we tested a model based on acoustic properties (formants), and one based on motor properties (articulation parameters), where model-free decoding of evoked fMRI activity during perception, imagery, and production of vowels had been successful. Results revealed that phonological information organizes around formant structure during the perception of vowels; interestingly, such a model was reconstructed in a broad temporal region, outside of the primary auditory cortex, but also in the pars triangularis of the left inferior frontal gyrus. Conversely, articulatory features were not associated with brain activity in these regions. Overall, our results call for a degree of interdependence based on acoustic information, between the frontal and temporal ends of the language network. Frontiers Media S.A. 2019-02-08 /pmc/articles/PMC6383050/ /pubmed/30837851 http://dx.doi.org/10.3389/fnhum.2019.00032 Text en Copyright © 2019 Rampinini, Handjaras, Leo, Cecchetti, Betta, Marotta, Ricciardi and Pietrini. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Rampinini, Alessandra Cecilia Handjaras, Giacomo Leo, Andrea Cecchetti, Luca Betta, Monica Marotta, Giovanna Ricciardi, Emiliano Pietrini, Pietro Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels |
title | Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels |
title_full | Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels |
title_fullStr | Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels |
title_full_unstemmed | Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels |
title_short | Formant Space Reconstruction From Brain Activity in Frontal and Temporal Regions Coding for Heard Vowels |
title_sort | formant space reconstruction from brain activity in frontal and temporal regions coding for heard vowels |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6383050/ https://www.ncbi.nlm.nih.gov/pubmed/30837851 http://dx.doi.org/10.3389/fnhum.2019.00032 |
work_keys_str_mv | AT rampininialessandracecilia formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT handjarasgiacomo formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT leoandrea formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT cecchettiluca formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT bettamonica formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT marottagiovanna formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT ricciardiemiliano formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels AT pietrinipietro formantspacereconstructionfrombrainactivityinfrontalandtemporalregionscodingforheardvowels |
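The description field above outlines fitting an acoustic (formant-based) model to fMRI activity evoked by heard vowels. As a rough illustration of that idea only, the sketch below compares a formant-space model with a neural dissimilarity matrix via representational similarity analysis. Everything in it is assumed rather than taken from the article: the vowel set, the F1/F2 values, the randomly perturbed stand-in for neural data, and the RSA approach itself, which may differ from the authors' actual analysis pipeline.

```python
# Hypothetical sketch (not the authors' pipeline): comparing a formant-based
# model of vowels with a neural dissimilarity matrix via representational
# similarity analysis (RSA).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Assumed mean F1/F2 values (Hz) for seven Italian vowels; illustrative only.
vowels = ["i", "e", "ɛ", "a", "ɔ", "o", "u"]
formants = np.array([
    [280, 2300],   # /i/
    [400, 2100],   # /e/
    [550, 1900],   # /ɛ/
    [750, 1300],   # /a/
    [550,  900],   # /ɔ/
    [400,  750],   # /o/
    [290,  650],   # /u/
], dtype=float)

# Model representational dissimilarity matrix (RDM): pairwise distances
# between vowels in formant (F1, F2) space, z-scored per dimension.
formants_z = (formants - formants.mean(axis=0)) / formants.std(axis=0)
model_rdm = pdist(formants_z, metric="euclidean")

# Placeholder neural RDM: in a real analysis this would come from pairwise
# dissimilarities of fMRI activity patterns evoked by each heard vowel
# within a region of interest (e.g., superior temporal cortex).
rng = np.random.default_rng(0)
neural_rdm = model_rdm + rng.normal(scale=0.5, size=model_rdm.shape)

# Spearman correlation between model and neural RDMs quantifies how well
# formant structure is reflected in the region's activity.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-to-brain fit: rho = {rho:.2f}, p = {p:.3f}")
```

In a real analysis the neural RDM would be computed from voxel-wise activity patterns in each region or searchlight, and the model fit would be assessed against a permutation-based null distribution rather than the parametric p-value shown here.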