
Revealing Neurocomputational Mechanisms of Reinforcement Learning and Decision-Making With the hBayesDM Package

Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of coding. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations.
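
As a rough sketch of the workflow described above (model fitting, convergence checks, visualization, Bayesian model comparison, and extraction of trial-by-trial regressors), the R snippet below uses the package's orthogonalized go/no-go task models (gng_m1, gng_m2) with its bundled example data. Function and argument names follow the published description of hBayesDM, but exact names, defaults, and the availability of the modelRegressor option may differ across package versions, and the sampler settings here are purely illustrative.

# Minimal sketch of the hBayesDM workflow; settings are illustrative, not recommendations.
library(hBayesDM)

# Fit two competing models of the go/no-go task to the bundled example data.
# Each call performs hierarchical Bayesian estimation, yielding individual- and
# group-level posterior distributions simultaneously.
fit1 <- gng_m1(data = "example", niter = 2000, nwarmup = 1000, nchain = 4, ncore = 4)
fit2 <- gng_m2(data = "example", niter = 2000, nwarmup = 1000, nchain = 4, ncore = 4)

# Check MCMC convergence (Rhat) and visualize group-level posteriors.
rhat(fit1)
plot(fit1)

# Compare the two models with information criteria (LOOIC/WAIC).
printFit(fit1, fit2)

# Extract trial-by-trial latent variables (e.g., value signals and prediction-error
# related quantities) for model-based fMRI/EEG; modelRegressor = TRUE is assumed
# to be supported for this task family.
fit1_reg <- gng_m1(data = "example", niter = 2000, nwarmup = 1000, nchain = 4,
                   ncore = 4, modelRegressor = TRUE)
str(fit1_reg$modelRegressor)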

Bibliographic Details
Main Authors: Ahn, Woo-Young; Haines, Nathaniel; Zhang, Lei
Format: Online Article (Text)
Language: English
Published: Comput Psychiatr, MIT Press, 2017
Subjects: Research
Rights: © 2017 Massachusetts Institute of Technology; published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5869013/
https://www.ncbi.nlm.nih.gov/pubmed/29601060
http://dx.doi.org/10.1162/CPSY_a_00002