
Training Deep Spiking Convolutional Neural Networks With STDP-Based Unsupervised Pre-training Followed by Supervised Fine-Tuning

Spiking Neural Networks (SNNs) are fast becoming a promising candidate for brain-inspired neuromorphic computing because of their inherent power efficiency and impressive inference accuracy across several cognitive tasks such as image classification and speech recognition. Recent efforts in SNNs have focused on implementing deeper networks with multiple hidden layers to incorporate exponentially more complex functional representations. In this paper, we propose a pre-training scheme using biologically plausible unsupervised learning, namely Spike-Timing-Dependent Plasticity (STDP), to better initialize the parameters in multi-layer systems prior to supervised optimization. The multi-layer SNN comprises alternating convolutional and pooling layers followed by fully-connected layers, all populated with leaky integrate-and-fire spiking neurons. We train the deep SNNs in two phases: first, convolutional kernels are pre-trained layer-wise with unsupervised learning; then, the synaptic weights are fine-tuned with spike-based supervised gradient-descent backpropagation. Our experiments on digit recognition demonstrate that STDP-based pre-training followed by gradient-based optimization provides improved robustness, faster (~2.5×) training, and better generalization compared with purely gradient-based training without pre-training.
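As a rough illustration of the two-phase method the abstract describes, the sketch below shows the two core ingredients in plain NumPy: leaky integrate-and-fire (LIF) dynamics and a pairwise STDP update with exponential traces. It is a minimal sketch under assumed parameters, not the authors' actual convolutional implementation: the constants (tau_mem, a_plus, a_minus), the toy layer sizes, and the Poisson rate encoding are all illustrative assumptions.

# Minimal sketch (not the authors' exact formulation) of the two ingredients
# the abstract names: LIF neurons and pairwise STDP. All parameter values
# and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def lif_step(v, spikes_in, w, tau_mem=20.0, v_th=1.0, dt=1.0):
    """One timestep of leaky integrate-and-fire dynamics.
    v: (n_out,) membrane potentials; spikes_in: (n_in,) binary input spikes;
    w: (n_out, n_in) synaptic weights."""
    v = v * np.exp(-dt / tau_mem) + w @ spikes_in  # leak, then integrate input
    spikes_out = (v >= v_th).astype(float)         # fire where threshold is crossed
    v = np.where(spikes_out > 0, 0.0, v)           # reset neurons that fired
    return v, spikes_out

def stdp_step(w, x_pre, x_post, spikes_in, spikes_out,
              a_plus=0.01, a_minus=0.012, tau_trace=20.0, dt=1.0,
              w_min=0.0, w_max=1.0):
    """Pairwise STDP with exponential traces: pre-before-post potentiates,
    post-before-pre depresses."""
    x_pre = x_pre * np.exp(-dt / tau_trace) + spikes_in    # presynaptic trace
    x_post = x_post * np.exp(-dt / tau_trace) + spikes_out # postsynaptic trace
    w = w + a_plus * np.outer(spikes_out, x_pre)   # LTP: post spike sees recent pre activity
    w = w - a_minus * np.outer(x_post, spikes_in)  # LTD: pre spike sees recent post activity
    return np.clip(w, w_min, w_max), x_pre, x_post

# Unsupervised pre-training loop over Poisson-encoded inputs (toy dimensions).
n_in, n_out, T = 64, 16, 100
w = rng.uniform(0.0, 0.3, size=(n_out, n_in))
v = np.zeros(n_out)
x_pre, x_post = np.zeros(n_in), np.zeros(n_out)
rates = rng.uniform(0.0, 0.05, size=n_in)  # stand-in for a pixel-rate encoding
for t in range(T):
    spikes_in = (rng.random(n_in) < rates).astype(float)
    v, spikes_out = lif_step(v, spikes_in, w)
    w, x_pre, x_post = stdp_step(w, x_pre, x_post, spikes_in, spikes_out)

In the paper's pipeline, weights shaped this way would then serve as the initialization for the second phase, the supervised spike-based backpropagation fine-tuning.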


Bibliographic Details
Main Authors: Lee, Chankyu; Panda, Priyadarshini; Srinivasan, Gopalakrishnan; Roy, Kaushik
Format: Online, Article, Text
Language: English
Published: Frontiers Media S.A., 2018
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6085488/
https://www.ncbi.nlm.nih.gov/pubmed/30123103
http://dx.doi.org/10.3389/fnins.2018.00435
Record ID: pubmed-6085488
Institution: National Center for Biotechnology Information
Collection: PubMed
Record Format: MEDLINE/PubMed
Journal: Front Neurosci (Frontiers Media S.A.)
Publication Date: 2018-08-03

Copyright © 2018 Lee, Panda, Srinivasan and Roy. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY; http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.