SSTDP: Supervised Spike Timing Dependent Plasticity for Efficient Spiking Neural Network Training

Spiking Neural Networks (SNNs) are a promising route to low-power, event-driven neuromorphic hardware, owing to their spatio-temporal information processing and high biological plausibility. Although SNNs are currently more efficient than artificial neural networks (ANNs), they are not yet as accurate. Error backpropagation is the most common method for directly training neural networks and has driven the success of ANNs across deep learning. However, because the signals transmitted in an SNN are non-differentiable, discrete binary spike events, the spike-based activation function prevents gradient-based optimization from being applied to SNNs directly, leaving a performance gap (in accuracy and latency) between SNNs and ANNs. This paper introduces a new learning algorithm, SSTDP, which bridges backpropagation (BP)-based learning and spike-timing-dependent plasticity (STDP)-based learning to train SNNs efficiently. The scheme combines the global optimization of BP with the efficient weight updates of STDP: it avoids differentiating the non-differentiable spike function in the BP pass while exploiting the local feature-extraction property of STDP. Consequently, the method reduces the likelihood of vanishing spikes during BP training and cuts the number of time steps, lowering network latency. SSTDP employs temporal coding and the Integrate-and-Fire (IF) neuron model, which yield considerable computational benefits. Experiments demonstrate the effectiveness of SSTDP: it achieves classification accuracies of 99.3% on Caltech 101, 98.1% on MNIST, and 91.3% on CIFAR-10, the best among SNNs trained with other learning methods, and surpasses the best inference accuracy of directly trained SNNs with 25–32× lower inference latency. An analysis of event-based computation further shows the efficiency of inference in the spiking domain: SSTDP needs 1.3–37.7× fewer addition operations per inference. The code is available at: https://github.com/MXHX7199/SNN-SSTDP.
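The record itself contains no code, but the mechanism the abstract describes, an IF neuron layer driven by temporal (time-to-first-spike) coding whose weights are updated by a local STDP kernel gated by a BP-derived global error, can be sketched compactly. The following Python sketch is illustrative only, not the authors' implementation (see the linked repository for that); the names (`if_first_spike_times`, `sstdp_update`, `T_MAX`, `TAU`) and the exponential kernel shape are assumptions.

```python
import numpy as np

T_MAX = 16   # discrete time steps per inference (assumed, not from the paper)
TAU = 4.0    # STDP time constant (assumed)

def if_first_spike_times(weights, in_times, threshold=1.0):
    """First spike time of each output neuron under Integrate-and-Fire dynamics.

    weights:  (n_out, n_in) synaptic weights
    in_times: (n_in,) input spike times; temporal coding, earlier = stronger
    """
    n_out = weights.shape[0]
    out_times = np.full(n_out, T_MAX, dtype=float)  # T_MAX means "never fired"
    v = np.zeros(n_out)                             # membrane potentials
    for t in range(T_MAX):
        # integrate the weights of inputs that spike exactly at step t
        v += weights @ (in_times == t).astype(float)
        fired = (v >= threshold) & (out_times == T_MAX)
        out_times[fired] = t                        # record only the first spike
    return out_times

def sstdp_update(w, in_times, out_times, error, lr=0.01):
    """SSTDP-style update: a global, BP-derived error term gates a local STDP
    kernel over pre/post spike-time differences, so no derivative of the
    non-differentiable spike function is ever taken."""
    dt = out_times[:, None] - in_times[None, :]      # (n_out, n_in)
    kernel = np.where(dt >= 0,  np.exp(-dt / TAU),   # causal pairs: potentiate
                                -np.exp(dt / TAU))   # acausal pairs: depress
    # error > 0 means the neuron fired later than its target, so causally
    # paired synapses are strengthened to pull the spike earlier
    return w + lr * error[:, None] * kernel

# Toy usage: 3 inputs, 2 outputs, targets expressed as desired spike times.
rng = np.random.default_rng(0)
w = rng.normal(0.5, 0.1, size=(2, 3))
x = np.array([1.0, 3.0, 6.0])         # input spike times
t_out = if_first_spike_times(w, x)
error = t_out - np.array([2.0, 5.0])  # actual minus target spike time
w = sstdp_update(w, x, t_out, error)
```

In temporal coding a larger weight means an earlier threshold crossing, which is why the sign of the update above runs opposite to a naive gradient step on the spike time; the paper's actual error and kernel definitions may differ.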

Bibliographic Details
Main Authors: Liu, Fangxin; Zhao, Wenbo; Chen, Yongbiao; Wang, Zongwu; Yang, Tao; Jiang, Li
Format: Online Article, Text
Language: English
Published: Frontiers Media S.A., 2021-11-04
Journal: Frontiers in Neuroscience (Front Neurosci)
Subjects: Neuroscience
Collection: PubMed (National Center for Biotechnology Information), record pubmed-8603828
License: Copyright © 2021 Liu, Zhao, Chen, Wang, Yang and Jiang. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8603828/
https://www.ncbi.nlm.nih.gov/pubmed/34803591
http://dx.doi.org/10.3389/fnins.2021.756876