
An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case

Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning—and specifically state-space expansion and reduction—within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) “slots” that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning—associated with these slots—can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model's ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of “one-shot” generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer.

Bibliographic Details
Main Authors: Smith, Ryan; Schwartenbeck, Philipp; Parr, Thomas; Friston, Karl J.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020-05-19
Journal: Front Comput Neurosci
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7250191/
https://www.ncbi.nlm.nih.gov/pubmed/32508611
http://dx.doi.org/10.3389/fncom.2020.00041

Copyright © 2020 Smith, Schwartenbeck, Parr and Friston. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.