Cooperative update of beliefs and state-transition functions in human reinforcement learning

It is widely known that reinforcement learning systems in the brain contribute to learning via interactions with the environment. These systems are capable of solving multidimensional problems, in which some dimensions are relevant to a reward, while others are not. To solve these problems, computat...

Bibliographic Details
Main Authors: Higashi, Hiroshi, Minami, Tetsuto, Nakauchi, Shigeki
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6881319/
https://www.ncbi.nlm.nih.gov/pubmed/31776353
http://dx.doi.org/10.1038/s41598-019-53600-9
_version_ 1783473921054474240
author Higashi, Hiroshi
Minami, Tetsuto
Nakauchi, Shigeki
author_facet Higashi, Hiroshi
Minami, Tetsuto
Nakauchi, Shigeki
author_sort Higashi, Hiroshi
collection PubMed
description It is widely known that reinforcement learning systems in the brain contribute to learning via interactions with the environment. These systems are capable of solving multidimensional problems, in which some dimensions are relevant to a reward, while others are not. To solve these problems, computational models use Bayesian learning, a strategy supported by behavioral and neural evidence in humans. Bayesian learning takes into account beliefs, which represent a learner’s confidence that a particular dimension is relevant to the reward. Beliefs are given as a posterior probability of the state-transition (reward) function that maps the optimal actions to the states in each dimension. However, when it comes to implementing this learning strategy, the order in which beliefs and state-transition functions are updated remains unclear. The present study investigates this update order using a trial-by-trial analysis of human behavior and electroencephalography signals during a task in which learners have to identify the reward-relevant dimension. Our behavioral and neural results reveal a cooperative update: within 300 ms after the outcome feedback, the state-transition functions are updated, followed by the beliefs for each dimension.
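The update scheme described in the abstract can be made concrete with a short sketch. The following Python snippet is a minimal illustration, not the authors' published model: it assumes a task with two stimulus dimensions, tracks each dimension's state-transition (reward) function with a simple delta rule, and then applies a Bayes-rule belief update using the freshly updated estimates, mirroring the reported update order. The number of dimensions and states, the learning rate, and all variable names are illustrative assumptions.

import numpy as np

N_DIMS = 2      # hypothetical number of stimulus dimensions
N_STATES = 3    # hypothetical number of states per dimension
ALPHA = 0.3     # assumed learning rate for the reward-function update

# q[d, s]: estimated probability that the chosen action in state s of
# dimension d is rewarded (one state-transition function per dimension).
q = np.full((N_DIMS, N_STATES), 0.5)

# belief[d]: posterior probability that dimension d is reward-relevant.
belief = np.full(N_DIMS, 1.0 / N_DIMS)

def cooperative_update(q, belief, states, reward):
    """One feedback event. states[d] is the observed state on dimension d;
    reward is 1 (rewarded) or 0 (not rewarded). Returns the new belief."""
    # Step 1: update the state-transition function of every dimension.
    for d in range(N_DIMS):
        q[d, states[d]] += ALPHA * (reward - q[d, states[d]])
    # Step 2: Bayes-rule belief update computed from the post-update
    # estimates (state-transition functions first, beliefs second).
    likelihood = np.array([q[d, states[d]] if reward == 1
                           else 1.0 - q[d, states[d]] for d in range(N_DIMS)])
    belief = likelihood * belief
    return belief / belief.sum()    # renormalize to a probability distribution

# Toy run: dimension 0 is truly relevant and its state 1 is always rewarded.
rng = np.random.default_rng(0)
for _ in range(50):
    states = rng.integers(0, N_STATES, size=N_DIMS)
    reward = int(states[0] == 1)
    belief = cooperative_update(q, belief, states, reward)
print(belief)   # belief[0] should approach 1 after enough trials

The design point the sketch encodes is the order of operations: the likelihood that drives the belief update is computed from the already-updated state-transition estimates, which is the cooperative update the abstract reports.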
format Online
Article
Text
id pubmed-6881319
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-6881319 2019-12-05 Cooperative update of beliefs and state-transition functions in human reinforcement learning Higashi, Hiroshi Minami, Tetsuto Nakauchi, Shigeki Sci Rep Article It is widely known that reinforcement learning systems in the brain contribute to learning via interactions with the environment. These systems are capable of solving multidimensional problems, in which some dimensions are relevant to a reward, while others are not. To solve these problems, computational models use Bayesian learning, a strategy supported by behavioral and neural evidence in humans. Bayesian learning takes into account beliefs, which represent a learner’s confidence that a particular dimension is relevant to the reward. Beliefs are given as a posterior probability of the state-transition (reward) function that maps the optimal actions to the states in each dimension. However, when it comes to implementing this learning strategy, the order in which beliefs and state-transition functions are updated remains unclear. The present study investigates this update order using a trial-by-trial analysis of human behavior and electroencephalography signals during a task in which learners have to identify the reward-relevant dimension. Our behavioral and neural results reveal a cooperative update: within 300 ms after the outcome feedback, the state-transition functions are updated, followed by the beliefs for each dimension. Nature Publishing Group UK 2019-11-27 /pmc/articles/PMC6881319/ /pubmed/31776353 http://dx.doi.org/10.1038/s41598-019-53600-9 Text en © The Author(s) 2019 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Article
Higashi, Hiroshi
Minami, Tetsuto
Nakauchi, Shigeki
Cooperative update of beliefs and state-transition functions in human reinforcement learning
title Cooperative update of beliefs and state-transition functions in human reinforcement learning
title_full Cooperative update of beliefs and state-transition functions in human reinforcement learning
title_fullStr Cooperative update of beliefs and state-transition functions in human reinforcement learning
title_full_unstemmed Cooperative update of beliefs and state-transition functions in human reinforcement learning
title_short Cooperative update of beliefs and state-transition functions in human reinforcement learning
title_sort cooperative update of beliefs and state-transition functions in human reinforcement learning
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6881319/
https://www.ncbi.nlm.nih.gov/pubmed/31776353
http://dx.doi.org/10.1038/s41598-019-53600-9
work_keys_str_mv AT higashihiroshi cooperativeupdateofbeliefsandstatetransitionfunctionsinhumanreinforcementlearning
AT minamitetsuto cooperativeupdateofbeliefsandstatetransitionfunctionsinhumanreinforcementlearning
AT nakauchishigeki cooperativeupdateofbeliefsandstatetransitionfunctionsinhumanreinforcementlearning