Neural Encoding of Auditory Statistics

Bibliographic Details
Main Authors: Skerritt-Davis, Benjamin, Elhilali, Mounya
Format: Online Article Text
Language: English
Published: Society for Neuroscience 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8336711/
https://www.ncbi.nlm.nih.gov/pubmed/34193552
http://dx.doi.org/10.1523/JNEUROSCI.1887-20.2021
_version_ 1783733365882486784
author Skerritt-Davis, Benjamin
Elhilali, Mounya
author_facet Skerritt-Davis, Benjamin
Elhilali, Mounya
author_sort Skerritt-Davis, Benjamin
collection PubMed
description The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks, together with global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences. SIGNIFICANCE STATEMENT The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences.
format Online
Article
Text
id pubmed-8336711
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Society for Neuroscience
record_format MEDLINE/PubMed
spelling pubmed-83367112021-08-05 Neural Encoding of Auditory Statistics Skerritt-Davis, Benjamin Elhilali, Mounya J Neurosci Research Articles The human brain extracts statistical regularities embedded in real-world scenes to sift through the complexity stemming from changing dynamics and entwined uncertainty along multiple perceptual dimensions (e.g., pitch, timbre, location). While there is evidence that sensory dynamics along different auditory dimensions are tracked independently by separate cortical networks, how these statistics are integrated to give rise to unified objects remains unknown, particularly in dynamic scenes that lack conspicuous coupling between features. Using tone sequences with stochastic regularities along spectral and spatial dimensions, this study examines behavioral and electrophysiological responses from human listeners (male and female) to changing statistics in auditory sequences and uses a computational model of predictive Bayesian inference to formulate multiple hypotheses for statistical integration across features. Neural responses reveal multiplexed brain responses reflecting both local statistics along individual features in frontocentral networks, together with global (object-level) processing in centroparietal networks. Independent tracking of local surprisal along each acoustic feature reveals linear modulation of neural responses, while global melody-level statistics follow a nonlinear integration of statistical beliefs across features to guide perception. Near identical results are obtained in separate experiments along spectral and spatial acoustic dimensions, suggesting a common mechanism for statistical inference in the brain. Potential variations in statistical integration strategies and memory deployment shed light on individual variability between listeners in terms of behavioral efficacy and fidelity of neural encoding of stochastic change in acoustic sequences. SIGNIFICANCE STATEMENT The world around us is complex and ever changing: in everyday listening, sound sources evolve along multiple dimensions, such as pitch, timbre, and spatial location, and they exhibit emergent statistical properties that change over time. In the face of this complexity, the brain builds an internal representation of the external world by collecting statistics from the sensory input along multiple dimensions. Using a Bayesian predictive inference model, this work considers alternative hypotheses for how statistics are combined across sensory dimensions. Behavioral and neural responses from human listeners show the brain multiplexes two representations, where local statistics along each feature linearly affect neural responses, and global statistics nonlinearly combine statistical beliefs across dimensions to shape perception of stochastic auditory sequences. Society for Neuroscience 2021-08-04 /pmc/articles/PMC8336711/ /pubmed/34193552 http://dx.doi.org/10.1523/JNEUROSCI.1887-20.2021 Text en Copyright © 2021 Skerritt-Davis and Elhilali https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International license (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.
spellingShingle Research Articles
Skerritt-Davis, Benjamin
Elhilali, Mounya
Neural Encoding of Auditory Statistics
title Neural Encoding of Auditory Statistics
title_full Neural Encoding of Auditory Statistics
title_fullStr Neural Encoding of Auditory Statistics
title_full_unstemmed Neural Encoding of Auditory Statistics
title_short Neural Encoding of Auditory Statistics
title_sort neural encoding of auditory statistics
topic Research Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8336711/
https://www.ncbi.nlm.nih.gov/pubmed/34193552
http://dx.doi.org/10.1523/JNEUROSCI.1887-20.2021
work_keys_str_mv AT skerrittdavisbenjamin neuralencodingofauditorystatistics
AT elhilalimounya neuralencodingofauditorystatistics
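
To make the Bayesian-inference framing in the record's description more concrete, the following Python sketch contrasts two ways of integrating per-feature statistics: a linear combination of per-feature surprisal versus a nonlinear (probabilistic-OR) combination of per-feature "something changed" beliefs. This is a toy stand-in only, not the authors' actual model: the running Gaussian estimator, the sigmoid threshold, and all function names are illustrative assumptions introduced here.

```python
# Illustrative sketch only: a toy stand-in for the sequential Bayesian
# inference described in the article abstract, NOT the authors' model.
# Each feature (e.g., pitch, spatial location) is summarized by a running
# Gaussian estimate; "surprisal" is the negative log predictive probability
# of each incoming tone under that estimate.
import numpy as np


def sequential_surprisal(values, prior_var=1.0):
    """Surprisal of each observation under a running Gaussian estimate."""
    surprisals = []
    seen = []
    for x in values:
        if len(seen) < 2:
            mu, var = (seen[-1] if seen else 0.0), prior_var
        else:
            mu, var = np.mean(seen), np.var(seen) + prior_var
        # Negative log-likelihood of x under N(mu, var)
        surprisals.append(0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var))
        seen.append(x)
    return np.array(surprisals)


def combine_linear(s_spectral, s_spatial):
    """Linear integration: per-feature surprisals simply add."""
    return s_spectral + s_spatial


def combine_nonlinear(s_spectral, s_spatial, threshold=2.0):
    """Nonlinear integration: probabilistic OR over per-feature
    'change' beliefs (sigmoid of surprisal relative to a threshold)."""
    p_spec = 1.0 / (1.0 + np.exp(-(s_spectral - threshold)))
    p_spat = 1.0 / (1.0 + np.exp(-(s_spatial - threshold)))
    return 1.0 - (1.0 - p_spec) * (1.0 - p_spat)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Tone sequence whose pitch statistics change halfway through,
    # while spatial statistics stay constant.
    pitch = np.concatenate([rng.normal(0, 1, 30), rng.normal(4, 1, 30)])
    space = rng.normal(0, 1, 60)
    s_pitch = sequential_surprisal(pitch)
    s_space = sequential_surprisal(space)
    print("linear   :", np.round(combine_linear(s_pitch, s_space)[28:34], 2))
    print("nonlinear:", np.round(combine_nonlinear(s_pitch, s_space)[28:34], 2))
```

In this toy setup the linear rule scales with the raw surprisal of each feature, whereas the nonlinear rule saturates once any single feature strongly signals a change, loosely mirroring the local (per-feature) versus global (object-level) distinction described in the abstract.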