A Bayesian Foundation for Individual Learning Under Uncertainty
Main Authors: Mathys, Christoph; Daunizeau, Jean; Friston, Karl J.; Stephan, Klaas E.
Format: Text
Language: English
Published: Frontiers Research Foundation, 2011
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3096853/ https://www.ncbi.nlm.nih.gov/pubmed/21629826 http://dx.doi.org/10.3389/fnhum.2011.00039
author | Mathys, Christoph; Daunizeau, Jean; Friston, Karl J.; Stephan, Klaas E.
collection | PubMed |
description | Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next highest level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean-field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability theory. |
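The description above is the technical core of this record: beliefs about hidden states are Gaussian, evolve as random walks whose step size (volatility) is set by the level above, and are revised trial by trial through precision-weighted prediction errors. Purely as a rough, non-authoritative illustration of that last idea, the sketch below implements a single-level version in Python: a scalar Gaussian belief updated by a prediction error whose weight is a ratio of precisions, with a fixed log-volatility parameter `omega` standing in for the coupling to a higher level. The function name, parameter values, and the single-level simplification are ours; this is not the paper's actual hierarchical update scheme.

```python
import numpy as np

def precision_weighted_update(mu, pi, u, omega=-4.0, pi_u=10.0):
    """One trial of a precision-weighted belief update on a hidden state x.

    Assumptions (ours, for illustration): x follows a Gaussian random walk
    with step variance exp(omega) -- a stand-in for the volatility that, in
    the full hierarchical model, would be set by the next-higher level --
    and the input u is a noisy observation of x with precision pi_u.

    mu, pi : posterior mean and precision of x after the previous trial
    u      : new observation on the current trial
    """
    # Prediction: the random walk inflates uncertainty by exp(omega)
    pi_hat = 1.0 / (1.0 / pi + np.exp(omega))    # precision of the prediction
    # Update: prediction error, weighted by a ratio of precisions
    delta = u - mu                               # prediction error
    lr = pi_u / (pi_hat + pi_u)                  # effective learning rate
    mu_new = mu + lr * delta
    pi_new = pi_hat + pi_u
    return mu_new, pi_new

# Toy usage: track a slowly drifting quantity from noisy observations
rng = np.random.default_rng(0)
x, mu, pi = 0.0, 0.0, 1.0
for t in range(200):
    x += 0.05 * rng.standard_normal()            # hidden state drifts
    u = x + rng.standard_normal() / np.sqrt(10)  # noisy observation
    mu, pi = precision_weighted_update(mu, pi, u)
print(f"final state {x:.3f}, final belief {mu:.3f} (precision {pi:.1f})")
```

In this sketch the effective learning rate `lr` rises when the prediction is uncertain (low `pi_hat`, e.g., under high assumed volatility) and falls when it is confident, which is the sense in which the abstract says the updates have a natural interpretation in terms of RL with an uncertainty-dependent learning rate.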
format | Text |
id | pubmed-3096853 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2011 |
publisher | Frontiers Research Foundation |
record_format | MEDLINE/PubMed |
spelling | Front Hum Neurosci, Frontiers Research Foundation, published online 2011-05-02. Copyright © 2011 Mathys, Daunizeau, Friston and Stephan. http://www.frontiersin.org/licenseagreement This is an open-access article subject to a non-exclusive license between the authors and Frontiers Media SA, which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and other Frontiers conditions are complied with.
title | A Bayesian Foundation for Individual Learning Under Uncertainty |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3096853/ https://www.ncbi.nlm.nih.gov/pubmed/21629826 http://dx.doi.org/10.3389/fnhum.2011.00039 |