Brain-optimized extraction of complex sound features that drive continuous auditory perception

Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, where new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and involvement of cortical sites outside of this region during audiovisual speech perception.

Bibliographic Details
Main Authors: Berezutskaya, Julia; Freudenburg, Zachary V.; Güçlü, Umut; van Gerven, Marcel A. J.; Ramsey, Nick F.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2020-07-02 (PLoS Comput Biol)
Subjects: Research Article
License: © 2020 Berezutskaya et al. Open access under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7363106/
https://www.ncbi.nlm.nih.gov/pubmed/32614826
http://dx.doi.org/10.1371/journal.pcbi.1007992