Predictive representations can link model-based reinforcement learning to model-free mechanisms
Humans and animals are capable of evaluating actions by considering their long-run future rewards through a process described using model-based reinforcement learning (RL) algorithms. The mechanisms by which neural circuits perform the computations prescribed by model-based RL remain largely unknown...
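To make concrete what "evaluating actions by considering their long-run future rewards" means computationally, here is a minimal illustrative sketch (not the authors' method, and all states, rewards, and numbers are hypothetical): a model-based agent with a known transition model uses value iteration to back up discounted future reward and score each action in a toy MDP.

```python
# Illustrative sketch only: model-based evaluation of actions by their
# long-run (discounted) future reward on a hypothetical 3-state MDP.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9

# T[s, a, s'] = probability of reaching s' after taking action a in state s
T = np.zeros((n_states, n_actions, n_states))
T[0, 0, 1] = 1.0   # action 0 in state 0 leads to state 1
T[0, 1, 2] = 1.0   # action 1 in state 0 leads to state 2
T[1, :, 1] = 1.0   # states 1 and 2 are absorbing
T[2, :, 2] = 1.0

# R[s] = reward received on entering state s (assumed values)
R = np.array([0.0, 1.0, 0.5])

# Value iteration: repeatedly back up long-run reward through the model
V = np.zeros(n_states)
for _ in range(100):
    Q = np.einsum('sap,p->sa', T, R + gamma * V)  # one-step lookahead
    V = Q.max(axis=1)

print(Q[0])  # expected long-run reward of each action in state 0
```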
Main Authors: Russek, Evan M., Momennejad, Ida, Botvinick, Matthew M., Gershman, Samuel J., Daw, Nathaniel D.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2017
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5628940/ https://www.ncbi.nlm.nih.gov/pubmed/28945743 http://dx.doi.org/10.1371/journal.pcbi.1005768
Similar Items
- Offline replay supports planning in human reinforcement learning
  by: Momennejad, Ida, et al.
  Published: (2018)
- Time representation in reinforcement learning models of the basal ganglia
  by: Gershman, Samuel J., et al.
  Published: (2014)
- Anxiety, avoidance, and sequential evaluation
  by: Zorowitz, Samuel, et al.
  Published: (2020)
- Extraversion differentiates between model-based and model-free strategies in a reinforcement learning task
  by: Skatova, Anya, et al.
  Published: (2013)
- Model-based hierarchical reinforcement learning and human action control
  by: Botvinick, Matthew, et al.
  Published: (2014)