
Model Reduction Through Progressive Latent Space Pruning in Deep Active Inference

Although still not fully understood, sleep is known to play an important role in learning and in pruning synaptic connections. From the active inference perspective, this can be cast as learning parameters of a generative model and Bayesian model reduction, respectively. In this article, we show how...

Full description

Bibliographic Details
Main Authors: Wauthier, Samuel T., De Boom, Cedric, Çatal, Ozan, Verbelen, Tim, Dhoedt, Bart
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8961807/
https://www.ncbi.nlm.nih.gov/pubmed/35360827
http://dx.doi.org/10.3389/fnbot.2022.795846
Collection: PubMed
Description: Although still not fully understood, sleep is known to play an important role in learning and in pruning synaptic connections. From the active inference perspective, this can be cast as learning parameters of a generative model and Bayesian model reduction, respectively. In this article, we show how to reduce dimensionality of the latent space of such a generative model, and hence model complexity, in deep active inference during training through a similar process. While deep active inference uses deep neural networks for state space construction, an issue remains in that the dimensionality of the latent space must be specified beforehand. We investigate two methods that are able to prune the latent space of deep active inference models. The first approach functions similar to sleep and performs model reduction post hoc. The second approach is a novel method which is more similar to reflection, operates during training and displays “aha” moments when the model is able to reduce latent space dimensionality. We show for two well-known simulated environments that model performance is retained in the first approach and only diminishes slightly in the second approach. We also show that reconstructions from a real world example are indistinguishable before and after reduction. We conclude that the most important difference constitutes a trade-off between training time and model performance in terms of accuracy and the ability to generalize, via minimization of model complexity.
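To make the pruning idea in the abstract concrete, the following is a minimal illustrative sketch, not the procedure used in the article (which relies on Bayesian model reduction within deep active inference): assuming a Gaussian latent posterior with parameters mu and log_var, dimensions whose per-dimension KL divergence to a standard-normal prior stays near zero carry essentially no information and are candidates for removal. The function names, threshold, and toy data below are hypothetical.

import numpy as np

def kl_per_dimension(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, 1) ) computed per latent dimension,
    # then averaged over the batch axis; inputs have shape (batch, latent_dim).
    kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return kl.mean(axis=0)

def prunable_dimensions(mu, log_var, threshold=1e-2):
    # Dimensions whose average KL falls below the threshold behave like the
    # prior and are flagged as pruning candidates (hypothetical criterion).
    return np.where(kl_per_dimension(mu, log_var) < threshold)[0]

# Toy usage: dimensions 2 and 3 collapse onto the prior and get flagged.
rng = np.random.default_rng(0)
mu = rng.normal(size=(128, 4)) * np.array([1.0, 1.0, 0.01, 0.01])
log_var = np.zeros((128, 4))  # unit posterior variance in every dimension
print(prunable_dimensions(mu, log_var))  # expected output: [2 3]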
ID: pubmed-8961807
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Journal: Front Neurorobot (Neuroscience)
Published online: 2022-03-11
Copyright © 2022 Wauthier, De Boom, Çatal, Verbelen and Dhoedt. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.