
Computational modeling of choice-induced preference change: A Reinforcement-Learning-based approach


Bibliographic Details
Main Authors: Zhu, Jianhong, Hashimoto, Junya, Katahira, Kentaro, Hirakawa, Makoto, Nakao, Takashi
Format: Online Article Text
Language: English
Published: Public Library of Science 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7790366/
https://www.ncbi.nlm.nih.gov/pubmed/33411720
http://dx.doi.org/10.1371/journal.pone.0244434
collection PubMed
description The value learning process has been investigated using decision-making tasks with a correct answer specified by the external environment (externally guided decision-making, EDM). In EDM, people are required to adjust their choices based on feedback, and the learning process is generally explained by the reinforcement learning (RL) model. In addition to EDM, value is also learned through internally guided decision-making (IDM), such as preference judgment, in which no correct answer defined by external circumstances is available. In IDM, the value of the chosen item is believed to increase and that of the rejected item to decrease (choice-induced preference change; CIPC). An RL-based model called the choice-based learning (CBL) model has been proposed to describe CIPC, in which the values of the chosen and/or rejected items are updated as if one's own choice were the correct answer. However, the validity of the CBL model has not been confirmed by fitting it to IDM behavioral data. The present study aims to examine the CBL model in IDM. We conducted simulations and a preference judgment task using novel contour shapes, and applied computational model analyses to the behavioral data. The results showed that the CBL model in which both the chosen and rejected items' values are updated fit the IDM behavioral data better than the other candidate models. Although previous studies using subjective preference ratings have repeatedly reported changes in the value of only the chosen or only the rejected items, we demonstrated for the first time that both items' values change, based solely on IDM choice behavioral data and computational model analyses.
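The CBL update described in the abstract — treating one's own choice as if it were the correct answer, so the chosen item's value is pulled upward and the rejected item's downward — can be sketched as a simple delta-rule update. This is an illustrative sketch, not the paper's exact specification: the parameter names (`alpha_c`, `alpha_r`, `beta`), the 0/1 pseudo-outcomes, and the softmax choice rule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_choice(q_a, q_b, beta):
    """Probability of choosing item A, given item values and inverse temperature beta."""
    return 1.0 / (1.0 + np.exp(-beta * (q_a - q_b)))

def cbl_update(q, chosen, rejected, alpha_c, alpha_r):
    """Choice-based learning: the choice itself acts as the 'correct answer',
    so the chosen value moves toward 1 and the rejected value toward 0."""
    q = q.copy()
    q[chosen]   += alpha_c * (1.0 - q[chosen])    # chosen item's value increases
    q[rejected] += alpha_r * (0.0 - q[rejected])  # rejected item's value decreases
    return q

# Toy simulation: repeated preference judgments between two initially
# indifferent items; choices feed back into the values (CIPC).
q = np.array([0.5, 0.5])
beta, alpha_c, alpha_r = 3.0, 0.2, 0.2
for _ in range(20):
    p_a = softmax_choice(q[0], q[1], beta)
    chosen = 0 if rng.random() < p_a else 1
    q = cbl_update(q, chosen, 1 - chosen, alpha_c, alpha_r)
print(q)  # inspect the learned values after 20 self-reinforcing choices
```

The study compares variants of this model (updating only the chosen value, only the rejected value, or both); the version above updates both, which is the variant the abstract reports as the best fit.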
id pubmed-7790366
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-7790366 2021-01-27 Computational modeling of choice-induced preference change: A Reinforcement-Learning-based approach. Zhu, Jianhong; Hashimoto, Junya; Katahira, Kentaro; Hirakawa, Makoto; Nakao, Takashi. PLoS One, Research Article.
Public Library of Science 2021-01-07 /pmc/articles/PMC7790366/ /pubmed/33411720 http://dx.doi.org/10.1371/journal.pone.0244434 Text en © 2021 Zhu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
topic Research Article