
Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets


Bibliographic Details
Main Authors: Lee, Jeongjun, Zhang, Renqian, Zhang, Wenrui, Liu, Yu, Li, Peng
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7082320/
https://www.ncbi.nlm.nih.gov/pubmed/32231513
http://dx.doi.org/10.3389/fnins.2020.00143
_version_ 1783508323929161728
author Lee, Jeongjun
Zhang, Renqian
Zhang, Wenrui
Liu, Yu
Li, Peng
author_sort Lee, Jeongjun
collection PubMed
description Spiking neural networks (SNNs) present a promising computing model, enabling bio-plausible information processing and event-driven, ultra-low-power neuromorphic hardware. However, training SNNs to reach the same performance as conventional deep artificial neural networks (ANNs), particularly with error backpropagation (BP) algorithms, poses a significant challenge due to the inherently complex dynamics and non-differentiable spike activities of spiking neurons. In this paper, we present the first study on realizing competitive spike-train level, BP-like algorithms to enable on-chip training of SNNs. We propose a novel spike-train level direct feedback alignment (ST-DFA) algorithm, which is much more bio-plausible and hardware-friendly than BP. We explore algorithm/hardware co-optimization and efficient online computation of neural signals for the on-chip implementation of ST-DFA. On the Xilinx ZC706 FPGA board, the proposed hardware-efficient ST-DFA shows excellent performance-vs.-overhead tradeoffs for real-world speech and image classification applications. SNN neural processors with on-chip ST-DFA training achieve competitive classification accuracies of 96.27% on the MNIST dataset with a 4× input resolution reduction and 84.88% on the challenging 16-speaker TI46 speech corpus, respectively. Compared to a hardware implementation of the state-of-the-art BP algorithm HM2-BP, the proposed ST-DFA design reduces functional resources by 76.7% and backward training latency by 31.6% while gracefully trading off classification performance.
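The core idea behind the algorithm described above, direct feedback alignment, can be illustrated with a minimal non-spiking sketch: instead of backpropagating the output error through the transposed forward weights, a fixed random feedback matrix projects the error directly to each hidden layer. The network sizes, tanh nonlinearity, and learning rate below are illustrative assumptions, not the paper's spiking (ST-DFA) formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny rate-based network: 4 inputs -> 8 hidden -> 2 outputs.
W1 = rng.normal(0.0, 0.5, (8, 4))   # input -> hidden weights (trained)
W2 = rng.normal(0.0, 0.5, (2, 8))   # hidden -> output weights (trained)
B  = rng.normal(0.0, 0.5, (8, 2))   # fixed random feedback matrix (never trained)

def dfa_step(x, y, lr=0.05):
    """One training step using direct feedback alignment."""
    h = np.tanh(W1 @ x)             # hidden activity
    y_hat = W2 @ h                  # linear readout
    e = y_hat - y                   # output error
    # DFA: project the output error straight back to the hidden layer
    # through the fixed matrix B, instead of through W2.T as in BP.
    dh = (B @ e) * (1.0 - h**2)     # gated by the local tanh derivative
    W2[...] -= lr * np.outer(e, h)
    W1[...] -= lr * np.outer(dh, x)
    return 0.5 * float(e @ e)       # squared-error loss for this sample

x = rng.normal(size=4)
y = np.array([1.0, -1.0])
losses = [dfa_step(x, y) for _ in range(300)]
```

Because B is fixed and random, the backward path needs no weight transport, which is what makes DFA-style rules attractive for on-chip learning hardware.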
format Online Article Text
id pubmed-7082320
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7082320 2020-03-30 Front Neurosci Neuroscience Frontiers Media S.A. 2020-03-13 /pmc/articles/PMC7082320/ /pubmed/32231513 http://dx.doi.org/10.3389/fnins.2020.00143 Text en Copyright © 2020 Lee, Zhang, Zhang, Liu and Li.
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Spike-Train Level Direct Feedback Alignment: Sidestepping Backpropagation for On-Chip Training of Spiking Neural Nets
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7082320/
https://www.ncbi.nlm.nih.gov/pubmed/32231513
http://dx.doi.org/10.3389/fnins.2020.00143