Time elapsed between choices in a probabilistic task correlates with repeating the same decision

Bibliographic Details
Main Authors: Jabłońska, Judyta, Szumiec, Łukasz, Zieliński, Piotr, Rodriguez Parkitna, Jan
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8248175/
https://www.ncbi.nlm.nih.gov/pubmed/33559232
http://dx.doi.org/10.1111/ejn.15144
Description
Summary: Reinforcement learning makes an action that yields a positive outcome more likely to be taken in the future. Here, we investigate how the time elapsed since an action affects subsequent decisions. Groups of C57BL/6J mice were housed in IntelliCages with ad libitum access to water and chow; they also had access to bottles with a reward: saccharin solution, alcohol, or a mixture of the two. The probability of receiving a reward in two of the cage corners switched between 0.9 and 0.3 every 48 hr over a period of ~33 days. As expected, in most animals the odds of repeating a corner choice were increased if that choice had previously been rewarded. Interestingly, the time elapsed since the previous choice also influenced the probability of repeating the choice, and this effect was independent of the previous outcome. The behavioral data were fitted with a series of reinforcement learning models. The best fits were achieved when the reward prediction update was coupled with separate learning rates for positive and negative outcomes and, additionally, a "fictitious" update of the expected value of the nonselected choice. Including a time-dependent decay of the expected values further improved the fit only marginally in some cases.
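
The summary describes the best-fitting model only verbally. As an illustration, the Python sketch below implements one plausible reading of that update rule; the exponential form of the time-dependent decay, the exact form of the fictitious update, and all names (update_values, alpha_pos, alpha_neg, decay, dt) are assumptions for this sketch, not details taken from the article.

```python
import numpy as np

def update_values(Q, choice, reward, alpha_pos, alpha_neg, decay, dt):
    """One trial of the hypothesized value update (illustrative sketch).

    Q         -- expected values of the two reward corners, shape (2,)
    choice    -- index of the chosen corner (0 or 1)
    reward    -- 1 if the choice was rewarded, 0 otherwise
    alpha_pos -- learning rate applied to positive prediction errors
    alpha_neg -- learning rate applied to negative prediction errors
    decay     -- rate of the optional time-dependent decay (0 disables it)
    dt        -- time elapsed since the previous choice
    """
    Q = Q.copy()
    other = 1 - choice

    # Assumed form of the time-dependent decay: expected values relax
    # exponentially toward zero with elapsed time.
    Q *= np.exp(-decay * dt)

    # Reward prediction error for the chosen corner, with separate
    # learning rates for positive and negative outcomes.
    delta = reward - Q[choice]
    Q[choice] += (alpha_pos if delta > 0 else alpha_neg) * delta

    # "Fictitious" update: the nonselected corner is updated as if it
    # had produced the opposite outcome (one common formulation).
    delta_f = (1 - reward) - Q[other]
    Q[other] += (alpha_pos if delta_f > 0 else alpha_neg) * delta_f

    return Q

# Example: a rewarded choice of corner 0 after a 30-unit pause.
Q = update_values(np.array([0.5, 0.5]), choice=0, reward=1,
                  alpha_pos=0.4, alpha_neg=0.2, decay=0.01, dt=30.0)
```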