
The functional form of value normalization in human reinforcement learning

Reinforcement learning research in humans and other species indicates that rewards are represented in a context-dependent manner. More specifically, reward representations seem to be normalized as a function of the value of the alternative options. The dominant view postulates that value context-dependence is achieved via a divisive normalization rule, inspired by perceptual decision-making research. However, behavioral and neural evidence points to another plausible mechanism: range normalization. Critically, previous experimental designs were ill-suited to disentangle the divisive and the range normalization accounts, which generate similar behavioral predictions in many circumstances. To address this question, we designed a new learning task where we manipulated, across learning contexts, the number of options and the value ranges. Behavioral and computational analyses falsify the divisive normalization account and rather provide support for the range normalization rule. Together, these results shed new light on the computational mechanisms underlying context-dependence in learning and decision-making.

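To make the contrast in the abstract concrete, the following is a minimal sketch of the two candidate normalization rules in their generic textbook forms. These are illustrative only: the paper fits trial-by-trial learning models, whose exact update equations may differ, and the reward values below are hypothetical.

```python
def divisive_normalization(values):
    """Divisive rule: each option's value is divided by the
    summed value of all options in the context, so adding any
    option shrinks every normalized value."""
    total = sum(values)
    return [v / total for v in values]

def range_normalization(values):
    """Range rule: each value is rescaled by the min-max range
    of the context, so it depends only on the best and worst
    options, not on how many options there are."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# The same two options embedded in two hypothetical contexts:
narrow = [10.0, 20.0]         # two options
wide = [10.0, 20.0, 90.0]     # a third, high-value option added

print(divisive_normalization(narrow), divisive_normalization(wide))
print(range_normalization(narrow), range_normalization(wide))
```

Under the divisive rule, adding the third option changes the normalized values of the first two (their sum-based denominator grows); under the range rule, they change only insofar as the context's minimum or maximum changes. This is the kind of divergence that manipulating the number of options and the value ranges can expose.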

Bibliographic Details
Main Authors: Bavard, Sophie; Palminteri, Stefano
Format: Online Article (Text)
Language: English
Published: eLife Sciences Publications, Ltd, 2023
Subjects: Computational and Systems Biology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10393293/
https://www.ncbi.nlm.nih.gov/pubmed/37428155
http://dx.doi.org/10.7554/eLife.83891
Collection: PubMed (PMC10393293)
Published online: 2023-07-10
© 2023, Bavard and Palminteri. This article is distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited.