Discounting of reward sequences: a test of competing formal models of hyperbolic discounting
Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data.
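The abstract contrasts several valuation schemes. As a minimal sketch (not the authors' code; parameter values, function names, and the Monte Carlo construction are illustrative assumptions), the Python below shows single-reward hyperbolic vs. exponential discounting, the Parallel model's additive valuation of a reward sequence, a μAgents-style approximation of a hyperbolic curve by a mixture of exponential discounters, and the one-step recursion obeyed by the hyperbolic discount factor, which is the kind of identity that makes a recursive (HDTD-style) formulation possible:

```python
# Illustrative sketch of the discounting schemes named in the abstract.
# K and GAMMA are arbitrary example parameters, not fitted values.
import numpy as np

K = 0.05      # hyperbolic discount rate (illustrative)
GAMMA = 0.95  # exponential discount factor (illustrative)

def hyperbolic(amount, delay, k=K):
    """Hyperbolic discounting: V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

def exponential(amount, delay, gamma=GAMMA):
    """Exponential discounting: V = A * gamma**D."""
    return amount * gamma ** delay

def parallel_sequence_value(rewards, delays, k=K):
    """Parallel model: a sequence is worth the sum of its individually
    hyperbolically discounted rewards, with no interaction between them."""
    return sum(hyperbolic(a, d, k) for a, d in zip(rewards, delays))

def hyperbolic_as_mixed_exponentials(amount, delay, k=K, n=20000):
    """muAgents-style idea: 1/(1+k*D) is an average of exponential
    discounts, since 1/(1+k*D) = integral_0^inf exp(-x)*exp(-k*D*x) dx.
    Here each sampled x acts like one exponentially discounting agent."""
    rng = np.random.default_rng(0)
    x = rng.exponential(1.0, size=n)   # one decay rate per "agent"
    return amount * np.mean(np.exp(-k * delay * x))

if __name__ == "__main__":
    # A three-reward sequence, echoing the experiment's design.
    rewards, delays = [10.0, 10.0, 10.0], [5, 10, 15]
    print("Parallel sequence value:", parallel_sequence_value(rewards, delays))
    print("Hyperbolic (A=10, D=10):  ", hyperbolic(10, 10))
    print("Exponential (A=10, D=10): ", exponential(10, 10))
    print("Mixture approximation:    ", hyperbolic_as_mixed_exponentials(10, 10))
    # The hyperbolic factor f(D) = 1/(1+k*D) obeys the one-step recursion
    # f(D+1) = f(D) / (1 + k*f(D)), starting from f(0) = 1.
    f = 1.0
    for _ in range(10):
        f = f / (1.0 + K * f)
    print("Recursed f(10):", f, "vs closed form:", 1.0 / (1.0 + K * 10))
```

Note the contrast this makes concrete: under the Parallel scheme each reward contributes independently, so the sequence value is a plain sum, whereas the HDTD model's recursive formulation couples the rewards non-linearly. That difference in how sequences are valued is exactly what the three-reward experiment was designed to discriminate.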
| Main Authors: | Zarr, Noah; Alexander, William H.; Brown, Joshua W. |
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2014 |
| Subjects: | Psychology |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3944395/ https://www.ncbi.nlm.nih.gov/pubmed/24639662 http://dx.doi.org/10.3389/fpsyg.2014.00178 |
_version_ | 1782306374242271232 |
author | Zarr, Noah; Alexander, William H.; Brown, Joshua W. |
author_facet | Zarr, Noah; Alexander, William H.; Brown, Joshua W. |
author_sort | Zarr, Noah |
collection | PubMed |
description | Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. |
format | Online Article Text |
id | pubmed-3944395 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-3944395 2014-03-17 Discounting of reward sequences: a test of competing formal models of hyperbolic discounting Zarr, Noah; Alexander, William H.; Brown, Joshua W. Front Psychol Psychology Humans are known to discount future rewards hyperbolically in time. Nevertheless, a formal recursive model of hyperbolic discounting has been elusive until recently, with the introduction of the hyperbolically discounted temporal difference (HDTD) model. Prior to that, models of learning (especially reinforcement learning) have relied on exponential discounting, which generally provides poorer fits to behavioral data. Recently, it has been shown that hyperbolic discounting can also be approximated by a summed distribution of exponentially discounted values, instantiated in the μAgents model. The HDTD model and the μAgents model differ in one key respect, namely how they treat sequences of rewards. The μAgents model is a particular implementation of a Parallel discounting model, which values sequences based on the summed value of the individual rewards, whereas the HDTD model contains a non-linear interaction. To discriminate among these models, we observed how subjects discounted a sequence of three rewards, and then we tested how well each candidate model fit the subject data. The results show that the Parallel model generally provides a better fit to the human data. Frontiers Media S.A. 2014-03-06 /pmc/articles/PMC3944395/ /pubmed/24639662 http://dx.doi.org/10.3389/fpsyg.2014.00178 Text en Copyright © 2014 Zarr, Alexander and Brown. http://creativecommons.org/licenses/by/3.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Zarr, Noah; Alexander, William H.; Brown, Joshua W. Discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
title | Discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
title_full | Discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
title_fullStr | Discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
title_full_unstemmed | Discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
title_short | Discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
title_sort | discounting of reward sequences: a test of competing formal models of hyperbolic discounting |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3944395/ https://www.ncbi.nlm.nih.gov/pubmed/24639662 http://dx.doi.org/10.3389/fpsyg.2014.00178 |
work_keys_str_mv | AT zarrnoah discountingofrewardsequencesatestofcompetingformalmodelsofhyperbolicdiscounting AT alexanderwilliamh discountingofrewardsequencesatestofcompetingformalmodelsofhyperbolicdiscounting AT brownjoshuaw discountingofrewardsequencesatestofcompetingformalmodelsofhyperbolicdiscounting |