Model-Free RL or Action Sequences?
The alignment of habits with model-free reinforcement learning (MF RL) is a success story for computational models of decision making, and MF RL has been applied to explain phasic dopamine responses (Schultz et al., 1997), working memory gating (O'Reilly and Frank, 2006), drug addiction (Redish, 2004), moral intuitions (Crockett, 2013; Cushman, 2013), and more. Yet, the role of MF RL has recently been challenged by an alternate model—model-based selection of chained action sequences—that produces similar behavioral and neural patterns. Here, we present two experiments that dissociate MF RL from this prominent alternative, and present unconfounded empirical support for the role of MF RL in human decision making. Our results also demonstrate that people are simultaneously using model-based selection of action sequences, thus demonstrating two distinct mechanisms of habitual control in a common experimental paradigm. These findings clarify the nature of habits and help solidify MF RL's central position in models of human behavior.
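For readers who want a concrete sense of the mechanism the abstract calls "MF RL", the sketch below illustrates a minimal model-free (temporal-difference) value update of the kind typically identified with habitual control. It is not taken from the article; the state/action sizes and the learning parameters are arbitrary assumptions chosen only for illustration.

```python
# Minimal sketch (not from the article) of a model-free Q-learning update:
# action values are cached from experienced reward-prediction errors,
# with no model of state transitions. All sizes/parameters are assumed.
import numpy as np

n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.95                # assumed learning rate and discount factor
Q = np.zeros((n_states, n_actions))     # cached action values

def mf_update(s, a, r, s_next):
    """One temporal-difference update; the prediction error plays the role
    often attributed to phasic dopamine responses."""
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error
    return td_error
```

A model-based controller, by contrast, would plan over a learned transition model; the article's contribution is to dissociate this kind of cached-value learning from model-based selection of chained action sequences.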
Main authors: | Morris, Adam; Cushman, Fiery
---|---
Format: | Online Article Text
Language: | English
Published: | Frontiers Media S.A., 2019
Subjects: | Psychology
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6933525/ https://www.ncbi.nlm.nih.gov/pubmed/31920900 http://dx.doi.org/10.3389/fpsyg.2019.02892
Field | Value
---|---
_version_ | 1783483228876701696
author | Morris, Adam; Cushman, Fiery
author_facet | Morris, Adam; Cushman, Fiery
author_sort | Morris, Adam |
collection | PubMed |
description | The alignment of habits with model-free reinforcement learning (MF RL) is a success story for computational models of decision making, and MF RL has been applied to explain phasic dopamine responses (Schultz et al., 1997), working memory gating (O'Reilly and Frank, 2006), drug addiction (Redish, 2004), moral intuitions (Crockett, 2013; Cushman, 2013), and more. Yet, the role of MF RL has recently been challenged by an alternate model—model-based selection of chained action sequences—that produces similar behavioral and neural patterns. Here, we present two experiments that dissociate MF RL from this prominent alternative, and present unconfounded empirical support for the role of MF RL in human decision making. Our results also demonstrate that people are simultaneously using model-based selection of action sequences, thus demonstrating two distinct mechanisms of habitual control in a common experimental paradigm. These findings clarify the nature of habits and help solidify MF RL's central position in models of human behavior. |
format | Online Article Text |
id | pubmed-6933525 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6933525 2020-01-09 Model-Free RL or Action Sequences? Morris, Adam Cushman, Fiery Front Psychol Psychology The alignment of habits with model-free reinforcement learning (MF RL) is a success story for computational models of decision making, and MF RL has been applied to explain phasic dopamine responses (Schultz et al., 1997), working memory gating (O'Reilly and Frank, 2006), drug addiction (Redish, 2004), moral intuitions (Crockett, 2013; Cushman, 2013), and more. Yet, the role of MF RL has recently been challenged by an alternate model—model-based selection of chained action sequences—that produces similar behavioral and neural patterns. Here, we present two experiments that dissociate MF RL from this prominent alternative, and present unconfounded empirical support for the role of MF RL in human decision making. Our results also demonstrate that people are simultaneously using model-based selection of action sequences, thus demonstrating two distinct mechanisms of habitual control in a common experimental paradigm. These findings clarify the nature of habits and help solidify MF RL's central position in models of human behavior. Frontiers Media S.A. 2019-12-20 /pmc/articles/PMC6933525/ /pubmed/31920900 http://dx.doi.org/10.3389/fpsyg.2019.02892 Text en Copyright © 2019 Morris and Cushman. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Morris, Adam Cushman, Fiery Model-Free RL or Action Sequences? |
title | Model-Free RL or Action Sequences? |
title_full | Model-Free RL or Action Sequences? |
title_fullStr | Model-Free RL or Action Sequences? |
title_full_unstemmed | Model-Free RL or Action Sequences? |
title_short | Model-Free RL or Action Sequences? |
title_sort | model-free rl or action sequences? |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6933525/ https://www.ncbi.nlm.nih.gov/pubmed/31920900 http://dx.doi.org/10.3389/fpsyg.2019.02892 |
work_keys_str_mv | AT morrisadam modelfreerloractionsequences AT cushmanfiery modelfreerloractionsequences |