Trainable quantization for Speedy Spiking Neural Networks
Main Authors: | Castagnetti, Andrea; Pegatoquet, Alain; Miramond, Benoît |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10020579/ https://www.ncbi.nlm.nih.gov/pubmed/36937675 http://dx.doi.org/10.3389/fnins.2023.1154241 |
_version_ | 1784908290198601728 |
---|---|
author | Castagnetti, Andrea; Pegatoquet, Alain; Miramond, Benoît |
author_facet | Castagnetti, Andrea; Pegatoquet, Alain; Miramond, Benoît |
author_sort | Castagnetti, Andrea |
collection | PubMed |
description | Spiking neural networks (SNNs) are considered the third generation of Artificial Neural Networks. SNNs perform computation using neurons and synapses that communicate through binary and asynchronous signals known as spikes. They have attracted significant research interest in recent years because their computing paradigm theoretically allows sparse and low-power operation. This hypothetical gain, claimed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competing with that of classical deep learning, the lack of mature learning frameworks, and a high data-processing latency that ultimately generates an energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency is not yet solved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This paper focuses on quantization error, one of the main consequences of the SNN's discrete representation of information. We argue that quantization error is the main source of the accuracy drop between ANNs and SNNs. In this article we propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neuron model. This model adapts the threshold of neurons during training and implements efficient quantization strategies. This novel approach better explains the global behavior of SNNs and minimizes quantization noise during training. The resulting SNN can be trained over a limited number of timesteps, reducing latency, while exceeding state-of-the-art accuracy and preserving high sparsity on the main datasets considered in the neuromorphic community. (A minimal illustrative sketch of this trainable-threshold mechanism follows the record below.) |
format | Online Article Text |
id | pubmed-10020579 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-10020579 2023-03-18 Trainable quantization for Speedy Spiking Neural Networks Castagnetti, Andrea; Pegatoquet, Alain; Miramond, Benoît Front Neurosci Neuroscience Spiking neural networks (SNNs) are considered the third generation of Artificial Neural Networks. SNNs perform computation using neurons and synapses that communicate through binary and asynchronous signals known as spikes. They have attracted significant research interest in recent years because their computing paradigm theoretically allows sparse and low-power operation. This hypothetical gain, claimed since the beginning of neuromorphic research, has however been limited by three main factors: the absence of an efficient learning rule competing with that of classical deep learning, the lack of mature learning frameworks, and a high data-processing latency that ultimately generates an energy overhead. While the first two limitations have recently been addressed in the literature, the major problem of latency is not yet solved. Indeed, information is not exchanged instantaneously between spiking neurons but gradually builds up over time as spikes are generated and propagated through the network. This paper focuses on quantization error, one of the main consequences of the SNN's discrete representation of information. We argue that quantization error is the main source of the accuracy drop between ANNs and SNNs. In this article we propose an in-depth characterization of SNN quantization noise. We then propose an end-to-end direct learning approach based on a new trainable spiking neuron model. This model adapts the threshold of neurons during training and implements efficient quantization strategies. This novel approach better explains the global behavior of SNNs and minimizes quantization noise during training. The resulting SNN can be trained over a limited number of timesteps, reducing latency, while exceeding state-of-the-art accuracy and preserving high sparsity on the main datasets considered in the neuromorphic community. Frontiers Media S.A. 2023-03-03 /pmc/articles/PMC10020579/ /pubmed/36937675 http://dx.doi.org/10.3389/fnins.2023.1154241 Text en Copyright © 2023 Castagnetti, Pegatoquet and Miramond. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience; Castagnetti, Andrea; Pegatoquet, Alain; Miramond, Benoît; Trainable quantization for Speedy Spiking Neural Networks |
title | Trainable quantization for Speedy Spiking Neural Networks |
title_full | Trainable quantization for Speedy Spiking Neural Networks |
title_fullStr | Trainable quantization for Speedy Spiking Neural Networks |
title_full_unstemmed | Trainable quantization for Speedy Spiking Neural Networks |
title_short | Trainable quantization for Speedy Spiking Neural Networks |
title_sort | trainable quantization for speedy spiking neural networks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10020579/ https://www.ncbi.nlm.nih.gov/pubmed/36937675 http://dx.doi.org/10.3389/fnins.2023.1154241 |
work_keys_str_mv | AT castagnettiandrea trainablequantizationforspeedyspikingneuralnetworks AT pegatoquetalain trainablequantizationforspeedyspikingneuralnetworks AT miramondbenoit trainablequantizationforspeedyspikingneuralnetworks |
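The description field above explains that the proposed model learns each neuron's firing threshold during training, so that the spike-based (quantized) representation can be optimized end-to-end over few timesteps. As a reading aid only, here is a minimal PyTorch sketch of that general mechanism: an integrate-and-fire neuron whose threshold is an `nn.Parameter`, trained through a surrogate gradient. This is not the authors' implementation; the class names, the boxcar surrogate, the soft-reset rule, and the fixed unrolling over a few timesteps are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, boxcar surrogate in the backward pass
    (one common choice for direct SNN training)."""

    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v_minus_th,) = ctx.saved_tensors
        # Let gradients pass only near the threshold (|v - th| < 0.5).
        return grad_out * (v_minus_th.abs() < 0.5).float()


class TrainableThresholdIF(nn.Module):
    """Integrate-and-fire layer whose firing threshold is a learned parameter
    (hypothetical name; illustrates the idea of threshold adaptation during training)."""

    def __init__(self, init_threshold: float = 1.0):
        super().__init__()
        self.threshold = nn.Parameter(torch.tensor(init_threshold))

    def forward(self, input_current: torch.Tensor, timesteps: int = 4):
        # Constant input drive unrolled over a small number of timesteps,
        # mirroring the low-latency regime discussed in the abstract.
        v = torch.zeros_like(input_current)  # membrane potential
        spikes = []
        for _ in range(timesteps):
            v = v + input_current                       # integrate
            s = SurrogateSpike.apply(v - self.threshold)  # fire if v >= threshold
            v = v - s * self.threshold                  # soft reset: subtract threshold
            spikes.append(s)
        return torch.stack(spikes)  # (timesteps, ...) binary spike train
```

Because the threshold is an `nn.Parameter`, a standard optimizer updates it jointly with the synaptic weights, which is the general sense in which the abstract's model "adapts the threshold of neurons during training"; the higher the threshold relative to the input drive, the fewer spikes are emitted per timestep, trading quantization error against sparsity.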