Back-Propagation Learning in Deep Spike-By-Spike Networks
Artificial neural networks (ANNs) are important building blocks in technical applications. They rely on noiseless continuous signals in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. We propose to bridge this gap with Spike-by-Spike (SbS) networks which represent a compromise between non-spiking and spiking versions of generative models. What is missing, however, are algorithms for finding weight sets that would optimize the output performances of deep SbS networks with many layers. Here, a learning rule for feed-forward SbS networks is derived. The properties of this approach are investigated and its functionality is demonstrated by simulations. In particular, a Deep Convolutional SbS network for classifying handwritten digits achieves a classification performance of roughly 99.3% on the MNIST test data when the learning rule is applied together with an optimizer. Thereby it approaches the benchmark results of ANNs without extensive parameter optimization. We envision this learning rule for SbS networks to provide a new basis for research in neuroscience and for technical applications, especially when they become implemented on specialized computational hardware.
Main Authors: | Rotermund, David; Pawelzik, Klaus R. |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2019 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6700320/ https://www.ncbi.nlm.nih.gov/pubmed/31456677 http://dx.doi.org/10.3389/fncom.2019.00055 |
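For orientation only (this record contains the abstract, not the model equations), the snippet below is a minimal, hypothetical sketch of the spike-by-spike inference step commonly associated with SbS generative models: non-negative latent activities h are updated multiplicatively once per observed input spike and kept normalized. The function name, weight layout, and the update parameter epsilon are illustrative assumptions, not details taken from the article.

```python
import numpy as np

def sbs_inference(weights, input_spikes, epsilon=0.1):
    """Illustrative spike-by-spike inference sketch (not the article's exact algorithm).

    weights: array of shape (n_inputs, n_hidden); column i approximates p(s | i),
             i.e., each column is a distribution over input channels.
    input_spikes: sequence of input-channel indices s_t, one per observed spike.
    """
    n_hidden = weights.shape[1]
    h = np.full(n_hidden, 1.0 / n_hidden)      # uniform initial latent state, sums to 1
    for s_t in input_spikes:
        likelihood = weights[s_t, :]           # p(s_t | i) for every hidden unit i
        h = h + epsilon * h * likelihood / np.dot(h, likelihood)
        h /= 1.0 + epsilon                     # renormalize so h keeps summing to 1
    return h

# Toy usage: 4 input channels, 3 hidden units, random normalized weights.
rng = np.random.default_rng(0)
W = rng.random((4, 3))
W /= W.sum(axis=0, keepdims=True)              # make each column a distribution over inputs
spikes = rng.integers(0, 4, size=50)           # 50 observed input spikes
print(sbs_inference(W, spikes, epsilon=0.1))
```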
author | Rotermund, David; Pawelzik, Klaus R. |
author_sort | Rotermund, David |
collection | PubMed |
description | Artificial neural networks (ANNs) are important building blocks in technical applications. They rely on noiseless continuous signals in stark contrast to the discrete action potentials stochastically exchanged among the neurons in real brains. We propose to bridge this gap with Spike-by-Spike (SbS) networks which represent a compromise between non-spiking and spiking versions of generative models. What is missing, however, are algorithms for finding weight sets that would optimize the output performances of deep SbS networks with many layers. Here, a learning rule for feed-forward SbS networks is derived. The properties of this approach are investigated and its functionality is demonstrated by simulations. In particular, a Deep Convolutional SbS network for classifying handwritten digits achieves a classification performance of roughly 99.3% on the MNIST test data when the learning rule is applied together with an optimizer. Thereby it approaches the benchmark results of ANNs without extensive parameter optimization. We envision this learning rule for SbS networks to provide a new basis for research in neuroscience and for technical applications, especially when they become implemented on specialized computational hardware. |
format | Online Article Text |
id | pubmed-6700320 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6700320 2019-08-27. Back-Propagation Learning in Deep Spike-By-Spike Networks. Rotermund, David; Pawelzik, Klaus R. Front Comput Neurosci (Neuroscience). Frontiers Media S.A., published 2019-08-13. /pmc/articles/PMC6700320/ /pubmed/31456677 http://dx.doi.org/10.3389/fncom.2019.00055. Copyright © 2019 Rotermund and Pawelzik. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Back-Propagation Learning in Deep Spike-By-Spike Networks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6700320/ https://www.ncbi.nlm.nih.gov/pubmed/31456677 http://dx.doi.org/10.3389/fncom.2019.00055 |