Intrinsic rewards explain context-sensitive valuation in reinforcement learning

Bibliographic Details
Main Authors: Molinaro, Gaia, Collins, Anne G. E.
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10374061/
https://www.ncbi.nlm.nih.gov/pubmed/37459394
http://dx.doi.org/10.1371/journal.pbio.3002201
collection PubMed
description When observing the outcome of a choice, people are sensitive to the choice’s context, such that the experienced value of an option depends on the alternatives: getting $1 when the possibilities were 0 or 1 feels much better than when the possibilities were 1 or 10. Context-sensitive valuation has been documented within reinforcement learning (RL) tasks, in which values are learned from experience through trial and error. Range adaptation, wherein options are rescaled according to the range of values yielded by available options, has been proposed to account for this phenomenon. However, we propose that other mechanisms—reflecting a different theoretical viewpoint—may also explain this phenomenon. Specifically, we theorize that internally defined goals play a crucial role in shaping the subjective value attributed to any given option. Motivated by this theory, we develop a new “intrinsically enhanced” RL model, which combines extrinsically provided rewards with internally generated signals of goal achievement as a teaching signal. Across 7 different studies (including previously published data sets as well as a novel, preregistered experiment with replication and control studies), we show that the intrinsically enhanced model can explain context-sensitive valuation as well as, or better than, range adaptation. Our findings indicate a more prominent role of intrinsic, goal-dependent rewards than previously recognized within formal models of human RL. By integrating internally generated signals of reward, standard RL theories should better account for human behavior, including context-sensitive valuation and beyond.
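The "intrinsically enhanced" model described in the abstract can be sketched as a delta-rule update whose teaching signal mixes the extrinsic reward with a binary, internally generated signal of goal achievement. This is a minimal illustrative sketch only: the linear mixing, the `omega` weight, and all names below are assumptions for exposition, not the authors' implementation.

```python
import numpy as np

def intrinsically_enhanced_update(q, choice, r_ext, goal_achieved,
                                  alpha=0.1, omega=0.5):
    """Delta-rule value update with a combined teaching signal.

    The teaching signal blends the extrinsic reward with a binary
    intrinsic reward for achieving an internally defined goal
    (here, obtaining the best outcome available in the context).
    All parameters are illustrative assumptions.
    """
    r_total = (1 - omega) * r_ext + omega * float(goal_achieved)
    q = q.copy()
    # Move the chosen option's value toward the combined signal.
    q[choice] += alpha * (r_total - q[choice])
    return q

# Same $1 extrinsic outcome in two contexts:
q = np.zeros(2)
# Context where $1 was the best possible outcome (goal achieved).
q_hit = intrinsically_enhanced_update(q, 0, r_ext=1.0, goal_achieved=True)
# Context where $10 was available (goal missed).
q_miss = intrinsically_enhanced_update(q, 0, r_ext=1.0, goal_achieved=False)
```

With this weighting, earning $1 when it was the best available option produces a larger value update than earning the same $1 when $10 was available, reproducing the context-sensitive valuation the abstract describes without rescaling outcomes by the range of available values.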
id pubmed-10374061
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling PLoS Biol Research Article
Public Library of Science 2023-07-17 /pmc/articles/PMC10374061/ /pubmed/37459394 http://dx.doi.org/10.1371/journal.pbio.3002201 Text en © 2023 Molinaro, Collins. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.