Learning to Estimate Dynamical State with Probabilistic Population Codes

Tracking moving objects, including one’s own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be...

Full description

Bibliographic Details
Main Authors: Makin, Joseph G., Dichter, Benjamin K., Sabes, Philip N.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2015
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4634970/
https://www.ncbi.nlm.nih.gov/pubmed/26540152
http://dx.doi.org/10.1371/journal.pcbi.1004554
author Makin, Joseph G.
Dichter, Benjamin K.
Sabes, Philip N.
collection PubMed
description Tracking moving objects, including one’s own body, is a fundamental ability of higher organisms, playing a central role in many perceptual and motor tasks. While it is unknown how the brain learns to follow and predict the dynamics of objects, it is known that this process of state estimation can be learned purely from the statistics of noisy observations. When the dynamics are simply linear with additive Gaussian noise, the optimal solution is the well-known Kalman filter (KF), the parameters of which can be learned via latent-variable density estimation (the EM algorithm). The brain does not, however, directly manipulate matrices and vectors, but instead appears to represent probability distributions with the firing rates of populations of neurons, “probabilistic population codes.” We show that a recurrent neural network—a modified form of an exponential family harmonium (EFH)—that takes a linear probabilistic population code as input can learn, without supervision, to estimate the state of a linear dynamical system. After observing a series of population responses (spike counts) to the position of a moving object, the network learns to represent the velocity of the object and forms nearly optimal predictions about the position at the next time-step. This result builds on our previous work showing that a similar network can learn to perform multisensory integration and coordinate transformations for static stimuli. The receptive fields of the trained network also make qualitative predictions about the developing and learning brain: tuning gradually emerges for higher-order dynamical states not explicitly present in the inputs, appearing as delayed tuning for the lower-order states.
format Online
Article
Text
id pubmed-4634970
institution National Center for Biotechnology Information
language English
publishDate 2015-11-05
publisher Public Library of Science
record_format MEDLINE/PubMed
journal PLoS Comput Biol
license © 2015 Makin et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
title Learning to Estimate Dynamical State with Probabilistic Population Codes
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4634970/
https://www.ncbi.nlm.nih.gov/pubmed/26540152
http://dx.doi.org/10.1371/journal.pcbi.1004554
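
The description above frames tracking as a state-estimation problem: for linear dynamics with additive Gaussian noise the optimal estimator is the Kalman filter, and the observations reach the brain as Poisson-like spike counts from a probabilistic population code. The sketch below is a minimal illustration of that setup under assumed parameters, not an implementation of the article's EFH network: it simulates a one-dimensional position/velocity system, encodes position with Gaussian tuning curves and Poisson noise, decodes a position estimate by center of mass, and tracks the full state with a standard Kalman filter. The number of neurons, tuning width, gain, and noise covariances are illustrative choices, not values from the article.

    # Illustrative sketch (not the paper's EFH network): a 1-D object moves under
    # linear-Gaussian dynamics, a population of Poisson neurons with Gaussian tuning
    # curves encodes its position, and a Kalman filter tracks position and velocity
    # from the population's center-of-mass position estimate. All numbers are
    # assumptions chosen for this demo, not values taken from the article.
    import numpy as np

    rng = np.random.default_rng(0)

    dt = 0.05                                    # time step (s), assumed
    A = np.array([[1.0, dt], [0.0, 1.0]])        # state transition for [position, velocity]
    Q = np.diag([1e-5, 1e-3])                    # process-noise covariance, assumed
    prefs = np.linspace(-1.0, 1.0, 40)           # preferred positions of 40 neurons, assumed
    width, gain = 0.2, 10.0                      # tuning width and peak spike count, assumed

    def population_response(pos):
        """Poisson spike counts from Gaussian tuning curves (a linear PPC)."""
        rates = gain * np.exp(-0.5 * ((pos - prefs) / width) ** 2)
        return rng.poisson(rates)

    def decode_position(counts):
        """Center-of-mass read-out of the population response."""
        return float(counts @ prefs / max(counts.sum(), 1))

    # Simulate the latent trajectory and the resulting decoded observations.
    T = 200
    x = np.array([0.0, 0.3])                     # true initial [position, velocity]
    truth, obs = [], []
    for _ in range(T):
        x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
        truth.append(x.copy())
        obs.append(decode_position(population_response(x[0])))

    # Standard Kalman filter run on the decoded positions.
    H = np.array([[1.0, 0.0]])                   # only (noisy) position is observed
    R = np.array([[width ** 2 / gain]])          # crude observation-noise guess, assumed
    mu, P = np.zeros(2), np.eye(2)               # initial state estimate and covariance
    estimates = []
    for z in obs:
        mu, P = A @ mu, A @ P @ A.T + Q                          # predict
        S = H @ P @ H.T + R                                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)                           # Kalman gain
        mu = mu + (K @ (np.array([z]) - H @ mu)).ravel()         # update mean
        P = (np.eye(2) - K @ H) @ P                              # update covariance
        estimates.append(mu.copy())

    truth, estimates = np.array(truth), np.array(estimates)
    print("position RMSE:", np.sqrt(np.mean((estimates[:, 0] - truth[:, 0]) ** 2)))

As in the article's account, velocity never appears in the observations; the filter recovers it only through its internal state, which is the quantity the trained network is reported to come to represent.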