
Intermediate acoustic-to-semantic representations link behavioral and neural responses to natural sounds

Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl’s gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl’s gyrus responses similar to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
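The study's model-comparison framework can be sketched, purely for illustration, as a representational similarity analysis: build a pairwise-dissimilarity matrix from each candidate feature model and correlate it with the behavioral dissimilarity matrix. This is not the authors' code; all feature spaces and data below are random placeholders standing in for acoustic, semantic and deep-network representations.

```python
# Illustrative sketch (not the authors' code): representational similarity
# analysis comparing hypothetical feature models against behavioral
# dissimilarity judgments for the same set of sounds.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_sounds = 20

# Hypothetical feature spaces (random placeholders).
models = {
    "acoustic": rng.normal(size=(n_sounds, 64)),    # e.g. modulation spectra
    "semantic": rng.normal(size=(n_sounds, 300)),   # e.g. word embeddings
    "dnn_layer": rng.normal(size=(n_sounds, 512)),  # e.g. a network layer
}

# Hypothetical behavioral data: condensed pairwise perceived dissimilarity.
behavior_rdm = pdist(rng.normal(size=(n_sounds, 10)), metric="euclidean")

# Score each model: Spearman correlation between its representational
# dissimilarity matrix (upper triangle) and the behavioral one.
scores = {}
for name, feats in models.items():
    model_rdm = pdist(feats, metric="correlation")
    rho, _ = spearmanr(model_rdm, behavior_rdm)
    scores[name] = rho

for name, rho in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: rho = {rho:.3f}")
```

In the paper's setting the same logic is applied to fMRI response patterns as well as behavior; with random placeholders, all correlations here hover near zero.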


Bibliographic Details

Main Authors: Giordano, Bruno L., Esposito, Michele, Valente, Giancarlo, Formisano, Elia
Format: Online Article Text
Language: English
Published: Nature Publishing Group US, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10076214/
https://www.ncbi.nlm.nih.gov/pubmed/36928634
http://dx.doi.org/10.1038/s41593-023-01285-9
Journal: Nat Neurosci
Published online: 2023-03-16
License: © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).