
Why do valence asymmetries emerge in value learning? A reinforcement learning account


Bibliographic Details
Main Authors: Hao, Chenxu, Cabrera-Haro, Lilian E., Lin, Ziyong, Reuter-Lorenz, Patricia A., Lewis, Richard L.
Format: Online, Article, Text
Language: English
Published: Springer US, 2022
Subjects: Special Issue/Uncertainty
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10390629/
https://www.ncbi.nlm.nih.gov/pubmed/36577934
http://dx.doi.org/10.3758/s13415-022-01050-8
author Hao, Chenxu
Cabrera-Haro, Lilian E.
Lin, Ziyong
Reuter-Lorenz, Patricia A.
Lewis, Richard L.
collection PubMed
description The Value Learning Task (VLT; e.g., Raymond & O’Brien, 2009) is widely used to investigate how acquired value impacts how we perceive and process stimuli. The task consists of a series of trials in which participants attempt to maximize accumulated winnings as they make choices from a pair of presented images associated with probabilistic win, loss, or no-change outcomes. The probabilities and outcomes are initially unknown to the participant and thus the task involves decision making and learning under uncertainty. Despite the symmetric outcome structure for win and loss pairs, people learn win associations better than loss associations (Lin, Cabrera-Haro, & Reuter-Lorenz, 2020). This learning asymmetry could lead to differences when the stimuli are probed in subsequent tasks, compromising inferences about how acquired value affects downstream processing. We investigate the nature of the asymmetry using a standard error-driven reinforcement learning model with a softmax choice rule. Despite having no special role for valence, the model yields the learning asymmetry observed in human behavior, whether the model parameters are set to maximize empirical fit, or task payoff. The asymmetry arises from an interaction between a neutral initial value estimate and a choice policy that exploits while exploring, leading to more poorly discriminated value estimates for loss stimuli. We also show how differences in estimated individual learning rates help to explain individual differences in the observed win-loss asymmetries, and how the final value estimates produced by the model provide a simple account of a post-learning explicit value categorization task. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.3758/s13415-022-01050-8.
format Online
Article
Text
id pubmed-10390629
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-10390629 2023-08-02. Cogn Affect Behav Neurosci, Special Issue/Uncertainty. Springer US, 2022-12-28 (issue year 2023). /pmc/articles/PMC10390629/ /pubmed/36577934 http://dx.doi.org/10.3758/s13415-022-01050-8 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the Creative Commons licence is provided, and any changes are indicated. Images or other third-party material in this article are included in the article's Creative Commons licence unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and the intended use is not permitted by statutory regulation or exceeds the permitted use, permission must be obtained directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
title Why do valence asymmetries emerge in value learning? A reinforcement learning account
topic Special Issue/Uncertainty
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10390629/
https://www.ncbi.nlm.nih.gov/pubmed/36577934
http://dx.doi.org/10.3758/s13415-022-01050-8
work_keys_str_mv AT haochenxu whydovalenceasymmetriesemergeinvaluelearningareinforcementlearningaccount
AT cabreraharoliliane whydovalenceasymmetriesemergeinvaluelearningareinforcementlearningaccount
AT linziyong whydovalenceasymmetriesemergeinvaluelearningareinforcementlearningaccount
AT reuterlorenzpatriciaa whydovalenceasymmetriesemergeinvaluelearningareinforcementlearningaccount
AT lewisrichardl whydovalenceasymmetriesemergeinvaluelearningareinforcementlearningaccount
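The description field above characterizes the model as a standard error-driven reinforcement learning model with a softmax choice rule and a neutral initial value estimate, applied to win and loss pairs with probabilistic outcomes. The following is a minimal Python sketch of how such a model might be simulated for a VLT-like task; it is not the authors' implementation, and the parameter values, outcome probabilities, and function names are illustrative assumptions.

# Minimal sketch (not the authors' code) of delta-rule value learning with a
# softmax choice rule on VLT-like win and loss pairs. All parameter values,
# probabilities, and names below are illustrative assumptions.

import math
import random

ALPHA = 0.3          # learning rate (assumed value)
BETA = 5.0           # softmax inverse temperature (assumed value)
INITIAL_VALUE = 0.0  # neutral starting estimate, as described in the abstract
N_TRIALS = 150       # trials per pair (assumed)


def softmax_choice(v_a, v_b, beta=BETA):
    """Choose stimulus 0 or 1 with probability given by a softmax over values."""
    p_a = 1.0 / (1.0 + math.exp(-beta * (v_a - v_b)))
    return 0 if random.random() < p_a else 1


def run_pair(outcome_probs, outcome, n_trials=N_TRIALS):
    """Simulate learning for one stimulus pair.

    outcome_probs : probability that each stimulus delivers `outcome`
                    (otherwise the trial is a no-change outcome, reward 0)
    outcome       : +1.0 for a win pair, -1.0 for a loss pair
    Returns the final value estimates [V0, V1].
    """
    values = [INITIAL_VALUE, INITIAL_VALUE]
    for _ in range(n_trials):
        choice = softmax_choice(values[0], values[1])
        reward = outcome if random.random() < outcome_probs[choice] else 0.0
        # Delta-rule (error-driven) update: V <- V + alpha * (reward - V)
        values[choice] += ALPHA * (reward - values[choice])
    return values


if __name__ == "__main__":
    random.seed(0)
    # Win pair: stimulus 0 wins on 80% of its choices, stimulus 1 on 20%.
    win_values = run_pair((0.8, 0.2), +1.0)
    # Loss pair: stimulus 0 loses on 80% of its choices, stimulus 1 on 20%.
    loss_values = run_pair((0.8, 0.2), -1.0)
    print("win pair value estimates :", [round(v, 2) for v in win_values])
    print("loss pair value estimates:", [round(v, 2) for v in loss_values])

In a simulation of this kind, the two ingredients the abstract points to are visible in the code: values start at a neutral 0.0, and the softmax exploits while it explores. For win pairs the frequently winning stimulus keeps being chosen and its estimate separates from its partner's, whereas for loss pairs the frequently losing stimulus is quickly avoided and therefore undersampled, so its estimate stays near the neutral starting point and the pair's values remain poorly discriminated, which is the win-loss learning asymmetry the abstract describes.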