
Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories

Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g., different speakers, articulation modes, etc.), and listeners need to recalibrate their internal models by appropriately weighing new versus old evidence. Models of speech recalibration classically ignore this volatility. The effect of volatility in tasks where sensory cues were associated with arbitrary experimenter-defined categories was well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling we show that recalibration of natural speech sound categories is better described by representing the latter at different time scales. We illustrate our proposal by modelling fast recalibration of speech sounds after experiencing the McGurk effect. We propose that working representations of speech categories are driven both by their current environment and their long-term memory representations.
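The two-time-scale idea in the abstract can be pictured as a pair of delta-rule updates running at different rates: a fast "working" category representation that tracks the current speaker while staying tethered to a slowly updated long-term memory. The sketch below is only an illustration of that idea under assumed parameter names and values, not the authors' generative model.

# Minimal sketch (assumptions throughout): two-time-scale prediction-error
# updates of a single speech-category feature estimate.
def recalibrate(observations,
                long_term=0.0,   # slow, memory-level estimate of the category feature
                fast_lr=0.5,     # large learning rate: working representation tracks recent input
                slow_lr=0.01,    # small learning rate: long-term memory drifts slowly
                anchor=0.2):     # pull of the working representation back toward memory
    """Track one category's expected sensory feature at two time scales."""
    working = long_term          # working representation starts at the memory value
    for x in observations:
        error = x - working                        # prediction error at the fast time scale
        working += fast_lr * error                 # rapid recalibration to the current context
        working += anchor * (long_term - working)  # remain anchored to long-term memory
        long_term += slow_lr * (x - long_term)     # slow consolidation of the same evidence
    return working, long_term

if __name__ == "__main__":
    # Hypothetical run of McGurk-like exposures with a shifted feature value:
    # the working representation moves quickly, the long-term one barely changes.
    w, lt = recalibrate([1.0] * 10)
    print(f"working={w:.2f}, long-term={lt:.2f}")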


Bibliographic Details
Main Authors: Olasagasti, Itsaso, Giraud, Anne-Lise
Format: Online Article Text
Language: English
Published: eLife Sciences Publications, Ltd 2020
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7217692/
https://www.ncbi.nlm.nih.gov/pubmed/32223894
http://dx.doi.org/10.7554/eLife.44516
_version_ 1783532647900774400
author Olasagasti, Itsaso
Giraud, Anne-Lise
author_facet Olasagasti, Itsaso
Giraud, Anne-Lise
author_sort Olasagasti, Itsaso
collection PubMed
description Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g., different speakers, articulation modes, etc.), and listeners need to recalibrate their internal models by appropriately weighing new versus old evidence. Models of speech recalibration classically ignore this volatility. The effect of volatility in tasks where sensory cues were associated with arbitrary experimenter-defined categories was well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling we show that recalibration of natural speech sound categories is better described by representing the latter at different time scales. We illustrate our proposal by modelling fast recalibration of speech sounds after experiencing the McGurk effect. We propose that working representations of speech categories are driven both by their current environment and their long-term memory representations.
format Online
Article
Text
id pubmed-7217692
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher eLife Sciences Publications, Ltd
record_format MEDLINE/PubMed
spelling pubmed-7217692 2020-05-13 Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories Olasagasti, Itsaso Giraud, Anne-Lise eLife Neuroscience Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g., different speakers, articulation modes, etc.), and listeners need to recalibrate their internal models by appropriately weighing new versus old evidence. Models of speech recalibration classically ignore this volatility. The effect of volatility in tasks where sensory cues were associated with arbitrary experimenter-defined categories was well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling we show that recalibration of natural speech sound categories is better described by representing the latter at different time scales. We illustrate our proposal by modelling fast recalibration of speech sounds after experiencing the McGurk effect. We propose that working representations of speech categories are driven both by their current environment and their long-term memory representations. eLife Sciences Publications, Ltd 2020-03-30 /pmc/articles/PMC7217692/ /pubmed/32223894 http://dx.doi.org/10.7554/eLife.44516 Text en © 2020, Olasagasti and Giraud http://creativecommons.org/licenses/by/4.0/ This article is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited.
spellingShingle Neuroscience
Olasagasti, Itsaso
Giraud, Anne-Lise
Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
title Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
title_full Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
title_fullStr Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
title_full_unstemmed Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
title_short Integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
title_sort integrating prediction errors at two time scales permits rapid recalibration of speech sound categories
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7217692/
https://www.ncbi.nlm.nih.gov/pubmed/32223894
http://dx.doi.org/10.7554/eLife.44516
work_keys_str_mv AT olasagastiitsaso integratingpredictionerrorsattwotimescalespermitsrapidrecalibrationofspeechsoundcategories
AT giraudannelise integratingpredictionerrorsattwotimescalespermitsrapidrecalibrationofspeechsoundcategories