Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning
Main Authors: Hodson, Rowan; Bassett, Bruce; van Hoof, Charel; Rosman, Benjamin; Solms, Mark; Shock, Jonathan P.; Smith, Ryan
Format: Online Article Text
Language: English
Published: Cornell University, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10462173/ https://www.ncbi.nlm.nih.gov/pubmed/37645053
_version_ | 1785098002084397056 |
author | Hodson, Rowan; Bassett, Bruce; van Hoof, Charel; Rosman, Benjamin; Solms, Mark; Shock, Jonathan P.; Smith, Ryan |
author_facet | Hodson, Rowan; Bassett, Bruce; van Hoof, Charel; Rosman, Benjamin; Solms, Mark; Shock, Jonathan P.; Smith, Ryan |
author_sort | Hodson, Rowan |
collection | PubMed |
description | Active Inference is a recently developed framework for modeling decision processes under uncertainty. Over the last several years, empirical and theoretical work has begun to evaluate the strengths and weaknesses of this approach and how it might be extended and improved. One recent extension is the “sophisticated inference” (SI) algorithm, which improves performance on multi-step planning problems through a recursive decision tree search. However, little work to date has been done to compare SI to other established planning algorithms in reinforcement learning (RL). In addition, SI was developed with a focus on inference as opposed to learning. The present paper therefore has two aims. First, we compare performance of SI to Bayesian RL schemes designed to solve similar problems. Second, we present and compare an extension of SI - sophisticated learning (SL) - that more fully incorporates active learning during planning. SL maintains beliefs about how model parameters would change under the future observations expected under each policy. This allows a form of counterfactual retrospective inference in which the agent considers what could be learned from current or past observations given different future observations. To accomplish these aims, we make use of a novel, biologically inspired environment that requires an optimal balance between goal-seeking and active learning, and which was designed to highlight the problem structure for which SL offers a unique solution. This setup requires an agent to continually search an open environment for available (but changing) resources in the presence of competing affordances for information gain. Our simulations demonstrate that SL outperforms all other algorithms in this context - most notably, Bayes-adaptive RL and upper confidence bound (UCB) algorithms, which aim to solve multi-step planning problems using similar principles (i.e., directed exploration and counterfactual reasoning about belief updates given different possible actions/observations). These results provide added support for the utility of Active Inference in solving this class of biologically-relevant problems and offer added tools for testing hypotheses about human cognition. |
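To make the algorithmic idea in the abstract concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation, which is not part of this record) of a recursive tree search that scores each action by expected reward plus an information-gain bonus, and that carries counterfactual Dirichlet belief updates down each simulated branch, in the spirit of the "planning to learn" mechanism described above. All names (plan, info_gain, kappa) and the toy environment are illustrative assumptions.

```python
# Hypothetical sketch of a recursive "plan to learn" tree search.
# Beliefs about the transition model are Dirichlet counts; each simulated
# branch updates those counts counterfactually, so deeper steps can value
# what would have been learned along the way.
import numpy as np

def info_gain(counts, s, a):
    """Approximate expected reduction in uncertainty about the (s, a)
    transition distribution (a common Dirichlet novelty bonus)."""
    c = counts[s, a]
    p = c / c.sum()
    return float(np.sum(p * (1.0 / c - 1.0 / c.sum())))

def plan(s, counts, reward, depth, gamma=0.9, kappa=1.0):
    """Return the best (value, action) from state s via recursive search.

    counts : Dirichlet parameters over transitions, shape (S, A, S)
    reward : reward for each state, shape (S,)
    kappa  : weight on the information-gain (active learning) term
    """
    if depth == 0:
        return 0.0, None
    best_v, best_a = -np.inf, None
    n_states, n_actions, _ = counts.shape
    for a in range(n_actions):
        p_next = counts[s, a] / counts[s, a].sum()
        value = kappa * info_gain(counts, s, a)
        for s_next in range(n_states):
            if p_next[s_next] < 1e-3:
                continue  # prune unlikely branches to keep the tree small
            # Counterfactual belief update: pretend we observed s -> s_next
            counts_next = counts.copy()
            counts_next[s, a, s_next] += 1.0
            v_future, _ = plan(s_next, counts_next, reward, depth - 1, gamma, kappa)
            value += p_next[s_next] * (reward[s_next] + gamma * v_future)
        if value > best_v:
            best_v, best_a = value, a
    return best_v, best_a

# Toy usage: 3 states, 2 actions, uniform Dirichlet priors, reward in state 2.
counts = np.ones((3, 2, 3))
reward = np.array([0.0, 0.0, 1.0])
print(plan(0, counts, reward, depth=3))
```

The key design choice illustrated here is that the Dirichlet counts passed into each recursive call already include the imagined observations on that branch, so the agent values actions partly for the learning they would enable later, rather than only for immediate reward or immediate novelty.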
format | Online Article Text |
id | pubmed-10462173 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cornell University |
record_format | MEDLINE/PubMed |
spelling | pubmed-10462173 2023-08-29 Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning Hodson, Rowan Bassett, Bruce van Hoof, Charel Rosman, Benjamin Solms, Mark Shock, Jonathan P. Smith, Ryan ArXiv Article Active Inference is a recently developed framework for modeling decision processes under uncertainty. Over the last several years, empirical and theoretical work has begun to evaluate the strengths and weaknesses of this approach and how it might be extended and improved. One recent extension is the “sophisticated inference” (SI) algorithm, which improves performance on multi-step planning problems through a recursive decision tree search. However, little work to date has been done to compare SI to other established planning algorithms in reinforcement learning (RL). In addition, SI was developed with a focus on inference as opposed to learning. The present paper therefore has two aims. First, we compare performance of SI to Bayesian RL schemes designed to solve similar problems. Second, we present and compare an extension of SI - sophisticated learning (SL) - that more fully incorporates active learning during planning. SL maintains beliefs about how model parameters would change under the future observations expected under each policy. This allows a form of counterfactual retrospective inference in which the agent considers what could be learned from current or past observations given different future observations. To accomplish these aims, we make use of a novel, biologically inspired environment that requires an optimal balance between goal-seeking and active learning, and which was designed to highlight the problem structure for which SL offers a unique solution. This setup requires an agent to continually search an open environment for available (but changing) resources in the presence of competing affordances for information gain. Our simulations demonstrate that SL outperforms all other algorithms in this context - most notably, Bayes-adaptive RL and upper confidence bound (UCB) algorithms, which aim to solve multi-step planning problems using similar principles (i.e., directed exploration and counterfactual reasoning about belief updates given different possible actions/observations). These results provide added support for the utility of Active Inference in solving this class of biologically-relevant problems and offer added tools for testing hypotheses about human cognition. Cornell University 2023-08-15 /pmc/articles/PMC10462173/ /pubmed/37645053 Text en https://creativecommons.org/licenses/by-nc-nd/4.0/ This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which allows reusers to copy and distribute the material in any medium or format in unadapted form only, for noncommercial purposes only, and only so long as attribution is given to the creator. |
spellingShingle | Article Hodson, Rowan Bassett, Bruce van Hoof, Charel Rosman, Benjamin Solms, Mark Shock, Jonathan P. Smith, Ryan Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning |
title | Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning |
title_full | Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning |
title_fullStr | Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning |
title_full_unstemmed | Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning |
title_short | Planning to Learn: A Novel Algorithm for Active Learning during Model-Based Planning |
title_sort | planning to learn: a novel algorithm for active learning during model-based planning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10462173/ https://www.ncbi.nlm.nih.gov/pubmed/37645053 |
work_keys_str_mv | AT hodsonrowan planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning AT bassettbruce planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning AT vanhoofcharel planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning AT rosmanbenjamin planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning AT solmsmark planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning AT shockjonathanp planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning AT smithryan planningtolearnanovelalgorithmforactivelearningduringmodelbasedplanning |