
Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning


Bibliographic Details
Main Authors: FitzGerald, Thomas H. B., Penny, Will D., Bonnici, Heidi M., Adams, Rick A.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861256/
https://www.ncbi.nlm.nih.gov/pubmed/33733122
http://dx.doi.org/10.3389/frai.2020.00002
_version_ 1783647046427738112
author FitzGerald, Thomas H. B.
Penny, Will D.
Bonnici, Heidi M.
Adams, Rick A.
author_facet FitzGerald, Thomas H. B.
Penny, Will D.
Bonnici, Heidi M.
Adams, Rick A.
author_sort FitzGerald, Thomas H. B.
collection PubMed
description Probabilistic models of cognition typically assume that agents make inferences about current states by combining new sensory information with fixed beliefs about the past, an approach known as Bayesian filtering. This is computationally parsimonious, but, in general, leads to suboptimal beliefs about past states, since it ignores the fact that new observations typically contain information about the past as well as the present. This is disadvantageous both because knowledge of past states may be intrinsically valuable, and because it impairs learning about fixed or slowly changing parameters of the environment. For these reasons, in offline data analysis it is usual to infer on every set of states using the entire time series of observations, an approach known as (fixed-interval) Bayesian smoothing. Unfortunately, however, this is impractical for real agents, since it requires the maintenance and updating of beliefs about an ever-growing set of states. We propose an intermediate approach, finite retrospective inference (FRI), in which agents update beliefs about a limited number of past states (formally, this represents online fixed-lag smoothing with a sliding window). This can be seen as a form of bounded rationality in which agents seek to optimize the accuracy of their beliefs subject to computational and other resource costs. We show through simulation that this approach has the capacity to significantly increase the accuracy of both inference and learning, using a simple variational scheme applied to both randomly generated Hidden Markov models (HMMs), and a specific application of the HMM, in the form of the widely used probabilistic reversal task. Our proposal thus constitutes a theoretical contribution to normative accounts of bounded rationality, which makes testable empirical predictions that can be explored in future work.
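The abstract's core idea (online fixed-lag smoothing with a sliding window) can be illustrated with a minimal sketch. This is not the authors' variational scheme; it is a hypothetical exact forward-backward pass over a two-state HMM, re-run inside a sliding window of length `lag` after each new observation, with beliefs about states older than the lag frozen (as in filtering). The names, the two-state setup, and the use of a previously smoothed belief as the window's prior are illustrative assumptions.

```python
# Hypothetical sketch of finite retrospective inference (FRI) as
# fixed-lag smoothing in a two-state HMM. Illustrative only.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

def fixed_lag_smooth(obs, prior, trans, emit, lag):
    """After each observation, re-infer states inside a sliding window
    of length `lag` via forward-backward; older beliefs stay frozen.
    trans[r][s] = P(next state s | state r); emit[s][o] = P(obs o | state s)."""
    beliefs = []  # one belief (length-2 distribution) per time step
    for t in range(len(obs)):
        start = max(0, t - lag + 1)
        window = obs[start:t + 1]
        # Prior at window start: the (frozen) belief from just before the
        # window. Reusing a smoothed belief here is an approximation.
        alpha0 = prior if start == 0 else beliefs[start - 1]
        # Forward pass over the window.
        alphas, a = [], alpha0
        for o in window:
            a = normalize([emit[s][o] * sum(a[r] * trans[r][s] for r in range(2))
                           for s in range(2)])
            alphas.append(a)
        # Backward pass over the window.
        betas = [[1.0, 1.0] for _ in window]
        for k in range(len(window) - 2, -1, -1):
            betas[k] = [sum(trans[s][r] * emit[r][window[k + 1]] * betas[k + 1][r]
                            for r in range(2)) for s in range(2)]
        # Retrospectively revise all beliefs inside the window.
        smoothed = [normalize([alphas[k][s] * betas[k][s] for s in range(2)])
                    for k in range(len(window))]
        beliefs[start:t + 1] = smoothed
    return beliefs
```

With `lag = 1` this reduces to Bayesian filtering; letting `lag` grow with the data recovers fixed-interval smoothing, so the window length is exactly the resource knob the abstract's bounded-rationality argument turns on.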
format Online
Article
Text
id pubmed-7861256
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-78612562021-03-16 Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning FitzGerald, Thomas H. B. Penny, Will D. Bonnici, Heidi M. Adams, Rick A. Front Artif Intell Artificial Intelligence Probabilistic models of cognition typically assume that agents make inferences about current states by combining new sensory information with fixed beliefs about the past, an approach known as Bayesian filtering. This is computationally parsimonious, but, in general, leads to suboptimal beliefs about past states, since it ignores the fact that new observations typically contain information about the past as well as the present. This is disadvantageous both because knowledge of past states may be intrinsically valuable, and because it impairs learning about fixed or slowly changing parameters of the environment. For these reasons, in offline data analysis it is usual to infer on every set of states using the entire time series of observations, an approach known as (fixed-interval) Bayesian smoothing. Unfortunately, however, this is impractical for real agents, since it requires the maintenance and updating of beliefs about an ever-growing set of states. We propose an intermediate approach, finite retrospective inference (FRI), in which agents update beliefs about a limited number of past states (formally, this represents online fixed-lag smoothing with a sliding window). This can be seen as a form of bounded rationality in which agents seek to optimize the accuracy of their beliefs subject to computational and other resource costs. We show through simulation that this approach has the capacity to significantly increase the accuracy of both inference and learning, using a simple variational scheme applied to both randomly generated Hidden Markov models (HMMs), and a specific application of the HMM, in the form of the widely used probabilistic reversal task.
Our proposal thus constitutes a theoretical contribution to normative accounts of bounded rationality, which makes testable empirical predictions that can be explored in future work. Frontiers Media S.A. 2020-02-18 /pmc/articles/PMC7861256/ /pubmed/33733122 http://dx.doi.org/10.3389/frai.2020.00002 Text en Copyright © 2020 FitzGerald, Penny, Bonnici and Adams. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
FitzGerald, Thomas H. B.
Penny, Will D.
Bonnici, Heidi M.
Adams, Rick A.
Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
title Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
title_full Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
title_fullStr Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
title_full_unstemmed Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
title_short Retrospective Inference as a Form of Bounded Rationality, and Its Beneficial Influence on Learning
title_sort retrospective inference as a form of bounded rationality, and its beneficial influence on learning
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7861256/
https://www.ncbi.nlm.nih.gov/pubmed/33733122
http://dx.doi.org/10.3389/frai.2020.00002
work_keys_str_mv AT fitzgeraldthomashb retrospectiveinferenceasaformofboundedrationalityanditsbeneficialinfluenceonlearning
AT pennywilld retrospectiveinferenceasaformofboundedrationalityanditsbeneficialinfluenceonlearning
AT bonniciheidim retrospectiveinferenceasaformofboundedrationalityanditsbeneficialinfluenceonlearning
AT adamsricka retrospectiveinferenceasaformofboundedrationalityanditsbeneficialinfluenceonlearning