
Sharing leaky-integrate-and-fire neurons for memory-efficient spiking neural networks

Spiking Neural Networks (SNNs) have gained increasing attention as energy-efficient neural networks owing to their binary and asynchronous computation. However, their non-linear activation, the Leaky-Integrate-and-Fire (LIF) neuron, requires additional memory to store the membrane voltage that captures the temporal dynamics of spikes. Although the memory cost of LIF neurons increases significantly as the input dimension grows, techniques to reduce it have not been explored so far. To address this, we propose a simple and effective solution, EfficientLIF-Net, which shares LIF neurons across different layers and channels. EfficientLIF-Net achieves accuracy comparable to standard SNNs while improving forward memory efficiency by up to ~4.3× and backward memory efficiency by up to ~21.9× for LIF neurons. We conduct experiments on various datasets including CIFAR10, CIFAR100, TinyImageNet, ImageNet-100, and N-Caltech101. Furthermore, we show that our approach also offers advantages on Human Activity Recognition (HAR) datasets, which rely heavily on temporal information. The code has been released at https://github.com/Intelligent-Computing-Lab-Yale/EfficientLIF-Net.
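
The sharing mechanism summarized above is easy to illustrate. Below is a minimal sketch, assuming a plain NumPy setting: two equal-width layers reuse a single LIF membrane buffer instead of each keeping its own. The class name SharedLIF and all hyperparameter values are hypothetical illustrations, not drawn from the released EfficientLIF-Net code.

```python
# A minimal sketch (not the authors' released implementation) of the
# cross-layer neuron sharing idea: one LIF membrane buffer is reused by
# several layers, so the forward pass stores a single voltage tensor
# instead of one per layer.
import numpy as np


class SharedLIF:
    """One leaky-integrate-and-fire state reused across layers."""

    def __init__(self, num_features, leak=0.9, threshold=1.0):
        self.v = np.zeros(num_features)  # the single shared membrane voltage
        self.leak = leak
        self.threshold = threshold

    def __call__(self, current):
        # Leaky integration of the incoming synaptic current.
        self.v = self.leak * self.v + current
        # Emit a binary spike wherever the voltage crosses the threshold.
        spikes = (self.v >= self.threshold).astype(current.dtype)
        # Soft reset: subtract the threshold from the neurons that fired;
        # the residual potential carries over to the next layer/timestep.
        self.v -= spikes * self.threshold
        return spikes


rng = np.random.default_rng(0)
w1 = rng.normal(scale=0.5, size=(128, 128))  # layer 1 weights
w2 = rng.normal(scale=0.5, size=(128, 128))  # layer 2 weights
lif = SharedLIF(128)  # ONE membrane buffer for BOTH layers

x = rng.random(128)
for t in range(4):  # unroll a few timesteps
    s1 = lif(w1 @ x)   # layer 1 spikes, using the shared membrane
    s2 = lif(w2 @ s1)  # layer 2 reuses the very same membrane buffer
# A standard SNN would keep a separate membrane per layer: 2x the
# neuron memory in this two-layer example.
```

With L equal-width layers sharing one buffer, forward-pass neuron memory shrinks roughly by a factor of L, which is the kind of saving consistent with the ~4.3× figure reported in the abstract.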

Bibliographic Details
Main Authors: Kim, Youngeun; Li, Yuhang; Moitra, Abhishek; Yin, Ruokai; Panda, Priyadarshini
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023-07-31
Journal: Front Neurosci
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10423932/
https://www.ncbi.nlm.nih.gov/pubmed/37583415
http://dx.doi.org/10.3389/fnins.2023.1230002
Collection: PubMed
Record ID: pubmed-10423932
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed

Copyright © 2023 Kim, Li, Moitra, Yin and Panda. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.