Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices
Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes…
Main Authors: | Zhang, Lei; Lengersdorff, Lukas; Mikus, Nace; Gläscher, Jan; Lamm, Claus |
Format: | Online Article Text |
Language: | English |
Published: | Oxford University Press, 2020 |
Subjects: | Original Manuscript |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7393303/ https://www.ncbi.nlm.nih.gov/pubmed/32608484 http://dx.doi.org/10.1093/scan/nsaa089 |
_version_ | 1783565016332500992 |
author | Zhang, Lei; Lengersdorff, Lukas; Mikus, Nace; Gläscher, Jan; Lamm, Claus |
author_facet | Zhang, Lei; Lengersdorff, Lukas; Mikus, Nace; Gläscher, Jan; Lamm, Claus |
author_sort | Zhang, Lei |
collection | PubMed |
description | Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla–Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, with simulation, we unpack the functional role of the learning rate and pinpoint what could easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and provide suggestions on how to justify whether the observed neural activation is related to the prediction error rather than to outcome valence. Finally, we suggest that posterior predictive checks are a crucial step after model comparison, and we advocate employing hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models to assist both beginners and advanced users in better implementing and interpreting their model-based analyses. (See the illustrative Rescorla–Wagner sketch after the record fields below.) |
format | Online Article Text |
id | pubmed-7393303 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-7393303 2020-08-04 Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices Zhang, Lei; Lengersdorff, Lukas; Mikus, Nace; Gläscher, Jan; Lamm, Claus Soc Cogn Affect Neurosci Original Manuscript Recent years have witnessed a dramatic increase in the use of reinforcement learning (RL) models in social, cognitive and affective neuroscience. This approach, in combination with neuroimaging techniques such as functional magnetic resonance imaging, enables quantitative investigations into latent mechanistic processes. However, increased use of relatively complex computational approaches has led to potential misconceptions and imprecise interpretations. Here, we present a comprehensive framework for the examination of (social) decision-making with the simple Rescorla–Wagner RL model. We discuss common pitfalls in its application and provide practical suggestions. First, with simulation, we unpack the functional role of the learning rate and pinpoint what could easily go wrong when interpreting differences in the learning rate. Then, we discuss the inevitable collinearity between outcome and prediction error in RL models and provide suggestions on how to justify whether the observed neural activation is related to the prediction error rather than to outcome valence. Finally, we suggest that posterior predictive checks are a crucial step after model comparison, and we advocate employing hierarchical modeling for parameter estimation. We aim to provide simple and scalable explanations and practical guidelines for employing RL models to assist both beginners and advanced users in better implementing and interpreting their model-based analyses. Oxford University Press 2020-06-29 /pmc/articles/PMC7393303/ /pubmed/32608484 http://dx.doi.org/10.1093/scan/nsaa089 Text en © The Author(s) 2020. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com http://creativecommons.org/licenses/by-nc/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com |
spellingShingle | Original Manuscript Zhang, Lei; Lengersdorff, Lukas; Mikus, Nace; Gläscher, Jan; Lamm, Claus Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
title | Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
title_full | Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
title_fullStr | Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
title_full_unstemmed | Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
title_short | Using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
title_sort | using reinforcement learning models in social neuroscience: frameworks, pitfalls and suggestions of best practices |
topic | Original Manuscript |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7393303/ https://www.ncbi.nlm.nih.gov/pubmed/32608484 http://dx.doi.org/10.1093/scan/nsaa089 |
work_keys_str_mv | AT zhanglei usingreinforcementlearningmodelsinsocialneuroscienceframeworkspitfallsandsuggestionsofbestpractices AT lengersdorfflukas usingreinforcementlearningmodelsinsocialneuroscienceframeworkspitfallsandsuggestionsofbestpractices AT mikusnace usingreinforcementlearningmodelsinsocialneuroscienceframeworkspitfallsandsuggestionsofbestpractices AT glascherjan usingreinforcementlearningmodelsinsocialneuroscienceframeworkspitfallsandsuggestionsofbestpractices AT lammclaus usingreinforcementlearningmodelsinsocialneuroscienceframeworkspitfallsandsuggestionsofbestpractices |
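The abstract above centers on the Rescorla–Wagner model, in which a learning rate scales the prediction error (outcome minus current expectation) that updates action values, and it highlights the resulting collinearity between outcome and prediction error. As a minimal illustrative sketch of that update rule (this is not code from the article; the two-option task, function name and parameters such as `alpha`, `beta` and `p_reward` are assumptions chosen for the example), a simple simulation might look like this:

```python
# Illustrative Rescorla-Wagner simulation on a hypothetical two-option task.
# The learning rate (alpha) scales the prediction error (PE) that drives value updates;
# because PE = outcome - expected value, outcome and PE are naturally correlated.
import numpy as np

rng = np.random.default_rng(0)

def simulate_rescorla_wagner(n_trials=100, alpha=0.3, beta=5.0, p_reward=(0.8, 0.2)):
    """Simulate choices and value updates with a Rescorla-Wagner model and softmax choice rule."""
    values = np.zeros(2)                      # initial action values V
    choices, prediction_errors = [], []
    for _ in range(n_trials):
        # Softmax choice rule with inverse temperature beta
        probs = np.exp(beta * values) / np.sum(np.exp(beta * values))
        choice = rng.choice(2, p=probs)
        reward = float(rng.random() < p_reward[choice])   # binary outcome (1 = reward)
        pe = reward - values[choice]                      # prediction error: outcome minus expectation
        values[choice] += alpha * pe                      # RW update, scaled by the learning rate
        choices.append(choice)
        prediction_errors.append(pe)
    return np.array(choices), np.array(prediction_errors), values

if __name__ == "__main__":
    choices, pes, final_values = simulate_rescorla_wagner()
    print("Final action values:", final_values)
    print("Mean |PE| over trials:", np.abs(pes).mean())
```

In this sketch, a larger `alpha` makes the values track recent outcomes more closely, which is the kind of behavioral consequence the article cautions against over-interpreting when comparing learning rates across conditions or groups.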