Spiking neuron network Helmholtz machine
An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine.
Main Authors: | Sountsov, Pavel; Miller, Paul |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2015 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4405618/ https://www.ncbi.nlm.nih.gov/pubmed/25954191 http://dx.doi.org/10.3389/fncom.2015.00046 |
_version_ | 1782367654920585216 |
author | Sountsov, Pavel Miller, Paul |
author_facet | Sountsov, Pavel Miller, Paul |
author_sort | Sountsov, Pavel |
collection | PubMed |
description | An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule. |
format | Online Article Text |
id | pubmed-4405618 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-4405618 2015-05-07 Spiking neuron network Helmholtz machine Sountsov, Pavel Miller, Paul Front Comput Neurosci Neuroscience Frontiers Media S.A. 2015-04-21 /pmc/articles/PMC4405618/ /pubmed/25954191 http://dx.doi.org/10.3389/fncom.2015.00046 Text en Copyright © 2015 Sountsov and Miller. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Sountsov, Pavel Miller, Paul Spiking neuron network Helmholtz machine |
title | Spiking neuron network Helmholtz machine |
title_full | Spiking neuron network Helmholtz machine |
title_fullStr | Spiking neuron network Helmholtz machine |
title_full_unstemmed | Spiking neuron network Helmholtz machine |
title_short | Spiking neuron network Helmholtz machine |
title_sort | spiking neuron network helmholtz machine |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4405618/ https://www.ncbi.nlm.nih.gov/pubmed/25954191 http://dx.doi.org/10.3389/fncom.2015.00046 |
work_keys_str_mv | AT sountsovpavel spikingneuronnetworkhelmholtzmachine AT millerpaul spikingneuronnetworkhelmholtzmachine |
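The abstract describes the wake-sleep algorithm's local delta rule, which the paper implements with spiking neurons. As a minimal illustrative sketch only (not the authors' spiking-neuron implementation), a classic two-layer binary Helmholtz machine with sigmoid units can be trained with wake and sleep phases on a hypothetical toy data set of two binary patterns:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid, eps = 6, 3, 0.05
# Recognition weights (visible -> hidden) and generative weights (hidden -> visible).
R = np.zeros((n_hid, n_vis)); b_r = np.zeros(n_hid)
G = np.zeros((n_vis, n_hid)); b_g = np.zeros(n_vis)
b_top = np.zeros(n_hid)  # generative bias over the hidden (top) layer

def sample(p):
    """Draw binary states with the given Bernoulli probabilities."""
    return (rng.random(p.shape) < p).astype(float)

# Toy training data: two complementary binary patterns (an assumption for the demo).
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]], dtype=float)

for step in range(2000):
    # -- Wake phase: recognition model is driven by data; generative weights
    #    learn via the local delta rule (target minus prediction).
    d = data[rng.integers(len(data))]
    h = sample(sigmoid(R @ d + b_r))
    p_d = sigmoid(G @ h + b_g)
    G += eps * np.outer(d - p_d, h)      # delta rule on generative weights
    b_g += eps * (d - p_d)
    b_top += eps * (h - sigmoid(b_top))  # delta rule on top-layer bias

    # -- Sleep phase: generative model "dreams" a sample; recognition weights
    #    learn to invert it, again via the delta rule.
    h_s = sample(sigmoid(b_top))
    d_s = sample(sigmoid(G @ h_s + b_g))
    q_h = sigmoid(R @ d_s + b_r)
    R += eps * np.outer(h_s - q_h, d_s)  # delta rule on recognition weights
    b_r += eps * (h_s - q_h)

# Reconstruct one training pattern through the learned recognition/generative loop.
recon = sigmoid(G @ sample(sigmoid(R @ data[0] + b_r)) + b_g)
```

Both updates use only locally available quantities (pre-synaptic activity and the post-synaptic target-minus-prediction error), which is what makes the wake-sleep algorithm a plausible candidate for a synaptic plasticity rule, as the abstract argues.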