
A flexible and generalizable model of online latent-state learning

Bibliographic Details

Main Authors: Cochran, Amy L., Cisler, Josh M.
Format: Online Article Text
Language: English
Published: Public Library of Science 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6762208/
https://www.ncbi.nlm.nih.gov/pubmed/31525176
http://dx.doi.org/10.1371/journal.pcbi.1007331
_version_ 1783454167755390976
author Cochran, Amy L.
Cisler, Josh M.
author_facet Cochran, Amy L.
Cisler, Josh M.
author_sort Cochran, Amy L.
collection PubMed
description Many models of classical conditioning fail to describe important phenomena, notably the rapid return of fear after extinction. To address this shortfall, evidence converged on the idea that learning agents rely on latent-state inferences, i.e. an ability to index disparate associations from cues to rewards (or penalties) and infer which index (i.e. latent state) is presently active. Our goal was to develop a model of latent-state inferences that uses latent states to predict rewards from cues efficiently and that can describe behavior in a diverse set of experiments. The resulting model combines a Rescorla-Wagner rule, for which updates to associations are proportional to prediction error, with an approximate Bayesian rule, for which beliefs in latent states are proportional to prior beliefs and an approximate likelihood based on current associations. In simulation, we demonstrate the model’s ability to reproduce learning effects both famously explained and not explained by the Rescorla-Wagner model, including rapid return of fear after extinction, the Hall-Pearce effect, partial reinforcement extinction effect, backwards blocking, and memory modification. Lastly, we derive our model as an online algorithm to maximum likelihood estimation, demonstrating it is an efficient approach to outcome prediction. Establishing such a framework is a key step towards quantifying normative and pathological ranges of latent-state inferences in various contexts.
format Online
Article
Text
id pubmed-6762208
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-6762208 2019-10-11 A flexible and generalizable model of online latent-state learning Cochran, Amy L. Cisler, Josh M. PLoS Comput Biol Research Article Many models of classical conditioning fail to describe important phenomena, notably the rapid return of fear after extinction. To address this shortfall, evidence converged on the idea that learning agents rely on latent-state inferences, i.e. an ability to index disparate associations from cues to rewards (or penalties) and infer which index (i.e. latent state) is presently active. Our goal was to develop a model of latent-state inferences that uses latent states to predict rewards from cues efficiently and that can describe behavior in a diverse set of experiments. The resulting model combines a Rescorla-Wagner rule, for which updates to associations are proportional to prediction error, with an approximate Bayesian rule, for which beliefs in latent states are proportional to prior beliefs and an approximate likelihood based on current associations. In simulation, we demonstrate the model’s ability to reproduce learning effects both famously explained and not explained by the Rescorla-Wagner model, including rapid return of fear after extinction, the Hall-Pearce effect, partial reinforcement extinction effect, backwards blocking, and memory modification. Lastly, we derive our model as an online algorithm to maximum likelihood estimation, demonstrating it is an efficient approach to outcome prediction. Establishing such a framework is a key step towards quantifying normative and pathological ranges of latent-state inferences in various contexts. Public Library of Science 2019-09-16 /pmc/articles/PMC6762208/ /pubmed/31525176 http://dx.doi.org/10.1371/journal.pcbi.1007331 Text en © 2019 Cochran, Cisler http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Cochran, Amy L.
Cisler, Josh M.
A flexible and generalizable model of online latent-state learning
title A flexible and generalizable model of online latent-state learning
title_full A flexible and generalizable model of online latent-state learning
title_fullStr A flexible and generalizable model of online latent-state learning
title_full_unstemmed A flexible and generalizable model of online latent-state learning
title_short A flexible and generalizable model of online latent-state learning
title_sort flexible and generalizable model of online latent-state learning
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6762208/
https://www.ncbi.nlm.nih.gov/pubmed/31525176
http://dx.doi.org/10.1371/journal.pcbi.1007331
work_keys_str_mv AT cochranamyl aflexibleandgeneralizablemodelofonlinelatentstatelearning
AT cislerjoshm aflexibleandgeneralizablemodelofonlinelatentstatelearning
AT cochranamyl flexibleandgeneralizablemodelofonlinelatentstatelearning
AT cislerjoshm flexibleandgeneralizablemodelofonlinelatentstatelearning
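
Note on the described model: the abstract characterizes it as a Rescorla-Wagner rule, in which association updates are proportional to prediction error, combined with an approximate Bayesian rule, in which beliefs in latent states are proportional to prior beliefs and an approximate likelihood based on current associations. The sketch below is a minimal illustration of that combination only, not the authors' published algorithm: the variable names, the Gaussian form of the approximate likelihood, and the belief-weighted update are all illustrative assumptions.

    import numpy as np

    def latent_state_rw_update(V, p, x, r, alpha=0.1, sigma=1.0):
        """One trial of an illustrative latent-state learner (assumed form,
        not the paper's exact model).

        V     : (n_states, n_cues) associations, one row per latent state
        p     : (n_states,) beliefs over latent states, summing to 1
        x     : (n_cues,) cue vector for this trial
        r     : observed reward (or penalty)
        alpha : learning rate
        sigma : assumed outcome noise scale for the Gaussian likelihood
        """
        pred = V @ x                      # each latent state's reward prediction
        delta = r - pred                  # per-state prediction errors
        # Approximate Bayesian rule: posterior belief is proportional to the
        # prior belief times a likelihood based on current associations.
        lik = np.exp(-0.5 * (delta / sigma) ** 2)
        p = p * lik
        p = p / p.sum()
        # Rescorla-Wagner rule: association updates proportional to prediction
        # error, here weighted by belief in each latent state.
        V = V + alpha * (p * delta)[:, None] * x[None, :]
        return V, p

    # Example: two latent states, one cue, a single rewarded trial
    V = np.zeros((2, 1))
    p = np.array([0.5, 0.5])
    V, p = latent_state_rw_update(V, p, x=np.array([1.0]), r=1.0)

Weighting the Rescorla-Wagner update by the current belief in each latent state is one simple way to let distinct associations coexist and be reactivated later (e.g. rapid return of fear after extinction); the paper's own derivation as an online approximation to maximum likelihood estimation should be consulted for the actual update rules.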