Subcortical coding of predictable and unsupervised sound-context associations

Bibliographic Details

Main Authors: Chen, Chi, Cruces-Solís, Hugo, Ertman, Alexandra, de Hoz, Livia
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10663128/
https://www.ncbi.nlm.nih.gov/pubmed/38020811
http://dx.doi.org/10.1016/j.crneur.2023.100110
_version_ 1785138329130369024
author Chen, Chi
Cruces-Solís, Hugo
Ertman, Alexandra
de Hoz, Livia
author_facet Chen, Chi
Cruces-Solís, Hugo
Ertman, Alexandra
de Hoz, Livia
author_sort Chen, Chi
collection PubMed
description Our environment is made of a myriad of stimuli present in combinations often patterned in predictable ways. For example, there is a strong association between where we are and the sounds we hear. Like many environmental patterns, sound-context associations are learned implicitly, in an unsupervised manner, and are highly informative and predictive of normality. Yet, we know little about where and how unsupervised sound-context associations are coded in the brain. Here we measured plasticity in the auditory midbrain of mice living over days in an enriched task-less environment in which entering a context triggered sound with different degrees of predictability. Plasticity in the auditory midbrain, a hub of auditory input and multimodal feedback, developed over days and reflected learning of contextual information in a manner that depended on the predictability of the sound-context association and not on reinforcement. Plasticity manifested as an increase in response gain and tuning shift that correlated with a general increase in neuronal frequency discrimination. Thus, the auditory midbrain is sensitive to unsupervised predictable sound-context associations, revealing a subcortical engagement in the detection of contextual sounds. By increasing frequency resolution, this detection might facilitate the processing of behaviorally relevant foreground information described to occur in cortical auditory structures.
format Online
Article
Text
id pubmed-10663128
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-106631282023-10-02 Subcortical coding of predictable and unsupervised sound-context associations Chen, Chi Cruces-Solís, Hugo Ertman, Alexandra de Hoz, Livia Curr Res Neurobiol Articles from the special issue: How Expectation Transforms Neural Representations and Perception, edited by Kerry Walker and Christopher I. Petkov Our environment is made of a myriad of stimuli present in combinations often patterned in predictable ways. For example, there is a strong association between where we are and the sounds we hear. Like many environmental patterns, sound-context associations are learned implicitly, in an unsupervised manner, and are highly informative and predictive of normality. Yet, we know little about where and how unsupervised sound-context associations are coded in the brain. Here we measured plasticity in the auditory midbrain of mice living over days in an enriched task-less environment in which entering a context triggered sound with different degrees of predictability. Plasticity in the auditory midbrain, a hub of auditory input and multimodal feedback, developed over days and reflected learning of contextual information in a manner that depended on the predictability of the sound-context association and not on reinforcement. Plasticity manifested as an increase in response gain and tuning shift that correlated with a general increase in neuronal frequency discrimination. Thus, the auditory midbrain is sensitive to unsupervised predictable sound-context associations, revealing a subcortical engagement in the detection of contextual sounds. By increasing frequency resolution, this detection might facilitate the processing of behaviorally relevant foreground information described to occur in cortical auditory structures. 
Elsevier 2023-10-02 /pmc/articles/PMC10663128/ /pubmed/38020811 http://dx.doi.org/10.1016/j.crneur.2023.100110 Text en © 2023 The Authors https://creativecommons.org/licenses/by-nc-nd/4.0/This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Articles from the special issue: How Expectation Transforms Neural Representations and Perception, edited by Kerry Walker and Christopher I. Petkov
Chen, Chi
Cruces-Solís, Hugo
Ertman, Alexandra
de Hoz, Livia
Subcortical coding of predictable and unsupervised sound-context associations
title Subcortical coding of predictable and unsupervised sound-context associations
title_full Subcortical coding of predictable and unsupervised sound-context associations
title_fullStr Subcortical coding of predictable and unsupervised sound-context associations
title_full_unstemmed Subcortical coding of predictable and unsupervised sound-context associations
title_short Subcortical coding of predictable and unsupervised sound-context associations
title_sort subcortical coding of predictable and unsupervised sound-context associations
topic Articles from the special issue: How Expectation Transforms Neural Representations and Perception, edited by Kerry Walker and Christopher I. Petkov
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10663128/
https://www.ncbi.nlm.nih.gov/pubmed/38020811
http://dx.doi.org/10.1016/j.crneur.2023.100110
work_keys_str_mv AT chenchi subcorticalcodingofpredictableandunsupervisedsoundcontextassociations
AT crucessolishugo subcorticalcodingofpredictableandunsupervisedsoundcontextassociations
AT ertmanalexandra subcorticalcodingofpredictableandunsupervisedsoundcontextassociations
AT dehozlivia subcorticalcodingofpredictableandunsupervisedsoundcontextassociations