Decomposing the effects of context valence and feedback information on speed and accuracy during reinforcement learning: a meta-analytical approach using diffusion decision modeling

Reinforcement learning (RL) models describe how humans and animals learn by trial-and-error to select actions that maximize rewards and minimize punishments. Traditional RL models focus exclusively on choices, thereby ignoring the interactions between choice preference and response time (RT), or how these interactions are influenced by contextual factors. However, in the field of perceptual decision-making, such interactions have proven to be important to dissociate between different underlying cognitive processes. Here, we investigated such interactions to shed new light on overlooked differences between learning to seek rewards and learning to avoid losses. We leveraged behavioral data from four RL experiments, which feature manipulations of two factors: outcome valence (gains vs. losses) and feedback information (partial vs. complete feedback). A Bayesian meta-analysis revealed that these contextual factors differently affect RTs and accuracy: While valence only affects RTs, feedback information affects both RTs and accuracy. To dissociate between the latent cognitive processes, we jointly fitted choices and RTs across all experiments with a Bayesian, hierarchical diffusion decision model (DDM). We found that the feedback manipulation affected drift rate, threshold, and non-decision time, suggesting that it was not a mere difficulty effect. Moreover, valence affected non-decision time and threshold, suggesting a motor inhibition in punishing contexts. To better understand the learning dynamics, we finally fitted a combination of RL and DDM (RLDDM). We found that while the threshold was modulated by trial-specific decision conflict, the non-decision time was modulated by the learned context valence. Overall, our results illustrate the benefits of jointly modeling RTs and choice data during RL, to reveal subtle mechanistic differences underlying decisions in different learning contexts.

Electronic Supplementary Material: The online version of this article (10.3758/s13415-019-00723-1) contains supplementary material, which is available to authorized users.
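To make the modeling approach described above concrete, the following is a minimal, hypothetical sketch of an RLDDM trial loop, written in Python/NumPy for illustration. It is not the authors' implementation: the delta-rule Q-update, the mapping from the Q-value difference to the drift rate, the conflict-dependent threshold, and the valence-dependent non-decision time are assumptions chosen to mirror the mechanisms named in the abstract, and every parameter name, value, and direction of modulation is a placeholder.

# Hypothetical RLDDM sketch (illustrative only; not the authors' code).
# Assumptions: delta-rule Q-learning; drift rate proportional to the Q-value
# difference; decision threshold modulated by trial-specific conflict;
# non-decision time modulated by the learned context value. Directions and
# magnitudes of these modulations are placeholders, not estimated effects.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rlddm_trial(q, v_scale=3.0, a0=1.5, a_conflict=0.5,
                         t0=0.3, t_valence=0.1, dt=0.001, noise=1.0):
    """Simulate one two-alternative trial; q holds the two current Q-values."""
    drift = v_scale * (q[1] - q[0])          # evidence favors the higher-valued option
    conflict = 1.0 - abs(q[1] - q[0])        # crude conflict index (Q-values in [0, 1])
    a = a0 + a_conflict * conflict           # more caution when options are close (assumed)
    ndt = t0 + t_valence * (1.0 - q.mean())  # slower non-decision time in low-value contexts (assumed)

    # Euler simulation of the diffusion between symmetric boundaries +/- a/2.
    x, t = 0.0, 0.0
    while abs(x) < a / 2:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return int(x > 0), ndt + t               # choice (1 = upper boundary), RT in seconds

# One learning block with probabilistic binary rewards and a delta-rule update.
alpha = 0.1
q = np.zeros(2)
p_reward = np.array([0.3, 0.7])
for trial in range(60):
    choice, rt = simulate_rlddm_trial(q)
    reward = float(rng.random() < p_reward[choice])
    q[choice] += alpha * (reward - q[choice])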

Bibliographic Details
Main Authors: Fontanesi, Laura; Palminteri, Stefano; Lebreton, Maël
Format: Online Article Text
Language: English
Published: Springer US, 2019
Subjects: Special Issue/Reward Systems, Cognition, and Emotion
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6598978/
https://www.ncbi.nlm.nih.gov/pubmed/31175616
http://dx.doi.org/10.3758/s13415-019-00723-1
Identifiers: PMC6598978; PubMed 31175616; DOI 10.3758/s13415-019-00723-1
Collection: PubMed (National Center for Biotechnology Information)
Record Format: MEDLINE/PubMed
Journal: Cogn Affect Behav Neurosci, Special Issue/Reward Systems, Cognition, and Emotion
Published Online: 2019-06-07
Rights: © The Author(s) 2019. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.