
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields


Bibliographic Details
Main Authors: Yildiz, Izzet B., Mesgarani, Nima, Deneve, Sophie
Format: Online Article Text
Language: English
Published: Society for Neuroscience 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5148225/
https://www.ncbi.nlm.nih.gov/pubmed/27927954
http://dx.doi.org/10.1523/JNEUROSCI.4648-15.2016
_version_ 1782473807583248384
author Yildiz, Izzet B.
Mesgarani, Nima
Deneve, Sophie
author_facet Yildiz, Izzet B.
Mesgarani, Nima
Deneve, Sophie
author_sort Yildiz, Izzet B.
collection PubMed
description A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas the linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by “explaining away,” a divisive competition between alternative interpretations of the auditory scene. SIGNIFICANCE STATEMENT Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects the selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data.
format Online
Article
Text
id pubmed-5148225
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher Society for Neuroscience
record_format MEDLINE/PubMed
spelling pubmed-51482252016-12-28 Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields Yildiz, Izzet B. Mesgarani, Nima Deneve, Sophie J Neurosci Research Articles A primary goal of auditory neuroscience is to identify the sound features extracted and represented by auditory neurons. Linear encoding models, which describe neural responses as a function of the stimulus, have been primarily used for this purpose. Here, we provide theoretical arguments and experimental evidence in support of an alternative approach, based on decoding the stimulus from the neural response. We used a Bayesian normative approach to predict the responses of neurons detecting relevant auditory features, despite ambiguities and noise. We compared the model predictions to recordings from the primary auditory cortex of ferrets and found that: (1) the decoding filters of auditory neurons resemble the filters learned from the statistics of speech sounds; (2) the decoding model captures the dynamics of responses better than a linear encoding model of similar complexity; and (3) the decoding model accounts for the accuracy with which the stimulus is represented in neural activity, whereas linear encoding model performs very poorly. Most importantly, our model predicts that neuronal responses are fundamentally shaped by “explaining away,” a divisive competition between alternative interpretations of the auditory scene. SIGNIFICANCE STATEMENT Neural responses in the auditory cortex are dynamic, nonlinear, and hard to predict. Traditionally, encoding models have been used to describe neural responses as a function of the stimulus. However, in addition to external stimulation, neural activity is strongly modulated by the responses of other neurons in the network. We hypothesized that auditory neurons aim to collectively decode their stimulus. In particular, a stimulus feature that is decoded (or explained away) by one neuron is not explained by another. We demonstrated that this novel Bayesian decoding model is better at capturing the dynamic responses of cortical neurons in ferrets. Whereas the linear encoding model poorly reflects selectivity of neurons, the decoding model can account for the strong nonlinearities observed in neural data. Society for Neuroscience 2016-12-07 /pmc/articles/PMC5148225/ /pubmed/27927954 http://dx.doi.org/10.1523/JNEUROSCI.4648-15.2016 Text en Copyright © 2016 Yildiz et al. https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed.
spellingShingle Research Articles
Yildiz, Izzet B.
Mesgarani, Nima
Deneve, Sophie
Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields
title Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields
title_full Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields
title_fullStr Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields
title_full_unstemmed Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields
title_short Predictive Ensemble Decoding of Acoustical Features Explains Context-Dependent Receptive Fields
title_sort predictive ensemble decoding of acoustical features explains context-dependent receptive fields
topic Research Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5148225/
https://www.ncbi.nlm.nih.gov/pubmed/27927954
http://dx.doi.org/10.1523/JNEUROSCI.4648-15.2016
work_keys_str_mv AT yildizizzetb predictiveensembledecodingofacousticalfeaturesexplainscontextdependentreceptivefields
AT mesgaraninima predictiveensembledecodingofacousticalfeaturesexplainscontextdependentreceptivefields
AT denevesophie predictiveensembledecodingofacousticalfeaturesexplainscontextdependentreceptivefields