
Learning and exploration in action-perception loops


Bibliographic Details

Main Authors: Little, Daniel Y.; Sommer, Friedrich T.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2013
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3619626/
https://www.ncbi.nlm.nih.gov/pubmed/23579347
http://dx.doi.org/10.3389/fncir.2013.00037
Description: Discovering the structure underlying observed data is a recurring problem in machine learning with important applications in neuroscience. It is also a primary function of the brain. When data can be actively collected in the context of a closed action-perception loop, behavior becomes a critical determinant of learning efficiency. Psychologists studying exploration and curiosity in humans and animals have long argued that learning itself is a primary motivator of behavior. However, the theoretical basis of learning-driven behavior is not well understood. Previous computational studies of behavior have largely focused on the control problem of maximizing acquisition of rewards and have treated learning the structure of data as a secondary objective. Here, we study exploration in the absence of external reward feedback. Instead, we take the quality of an agent's learned internal model to be the primary objective. In a simple probabilistic framework, we derive a Bayesian estimate for the amount of information about the environment an agent can expect to receive by taking an action, a measure we term the predicted information gain (PIG). We develop exploration strategies that approximately maximize PIG. One strategy based on value-iteration consistently learns faster than previously developed reward-free exploration strategies across a diverse range of environments. Psychologists believe the evolutionary advantage of learning-driven exploration lies in the generalized utility of an accurate internal model. Consistent with this hypothesis, we demonstrate that agents which learn more efficiently during exploration are later better able to accomplish a range of goal-directed tasks. We will conclude by discussing how our work elucidates the explorative behaviors of animals and humans, its relationship to other computational models of behavior, and its potential application to experimental design, such as in closed-loop neurophysiology studies.
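The PIG measure summarized in the abstract can be sketched for a discrete environment in which the agent keeps transition counts and a Dirichlet prior over outcome probabilities. The code below is a minimal illustration, not the paper's exact formulation: the pseudo-count `alpha`, the posterior-mean estimator, and the function names are all assumptions chosen for clarity. PIG for a state-action pair is the expected Kullback-Leibler divergence between the belief after one more (hypothetical) observation and the current belief, weighted by the predicted probability of each outcome.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

def predicted_information_gain(counts, alpha=1.0):
    """Illustrative PIG for one state-action pair.

    counts : observed transition counts to each successor state
    alpha  : Dirichlet pseudo-count (an assumed hyperparameter)

    Returns the expected KL divergence between the posterior-mean
    transition distribution after one more hypothetical observation
    and the current posterior mean.
    """
    counts = np.asarray(counts, dtype=float)
    n = len(counts)
    current = (counts + alpha) / (counts.sum() + alpha * n)
    pig = 0.0
    for s_next in range(n):
        updated_counts = counts.copy()
        updated_counts[s_next] += 1.0  # hypothetical outcome: land in s_next
        updated = (updated_counts + alpha) / (updated_counts.sum() + alpha * n)
        # weight each hypothetical belief update by its predicted probability
        pig += current[s_next] * kl(updated, current)
    return pig

# A greedy explorer would pick the action with the highest PIG in its
# current state, e.g.:
#   best_action = max(actions, key=lambda a: predicted_information_gain(counts[a][s]))
```

Note that PIG shrinks as counts accumulate: a well-sampled transition promises little further information, so a PIG-maximizing agent is naturally drawn toward poorly explored parts of the environment, which matches the exploration behavior the abstract describes.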
Published in Front Neural Circuits (Neuroscience section), 2013-03-22. Copyright © 2013 Little and Sommer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits use, distribution and reproduction in other forums, provided the original authors and source are credited and subject to any copyright notices concerning any third-party graphics etc.