A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications
Inspired by the computational efficiency of the biological brain, spiking neural networks (SNNs) emulate biological neural networks, neural codes, dynamics, and circuitry. SNNs show great potential for the implementation of unsupervised learning using in-memory computing. Here, we report an algorithmic optimization that improves energy efficiency of online learning with SNNs on emerging non-volatile memory (eNVM) devices. We develop a pruning method for SNNs by exploiting the output firing characteristics of neurons. Our pruning method can be applied during network training, which is different from previous approaches in the literature that employ pruning on already-trained networks. This approach prevents unnecessary updates of network parameters during training. This algorithmic optimization can complement the energy efficiency of eNVM technology, which offers a unique in-memory computing platform for the parallelization of neural network operations. Our SNN maintains ~90% classification accuracy on the MNIST dataset with up to ~75% pruning, significantly reducing the number of weight updates. The SNN and pruning scheme developed in this work can pave the way toward applications of eNVM-based neuro-inspired systems for energy-efficient online learning in low-power applications.
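The abstract describes the method only at a high level: during training, synapses are "soft-pruned" based on the firing characteristics of neurons, so that unnecessary weight updates (and the corresponding eNVM write pulses) are skipped. The paper's exact pruning criterion and learning rule are not given here; the following is a minimal NumPy sketch of the general idea under illustrative assumptions — a rate-based criterion (freeze synapses fed by the least-active inputs), a simple Hebbian-style update, and arbitrary layer sizes, with a Poisson draw standing in for measured spike totals.

```python
import numpy as np

rng = np.random.default_rng(42)

n_in, n_out = 100, 10                        # input lines x excitatory neurons
w = rng.random((n_in, n_out))                # synaptic weights (device conductances)
trainable = np.ones((n_in, n_out), bool)     # soft-prune mask: False = frozen

def soft_prune(firing_counts, fraction):
    """Freeze weight updates for synapses fed by the least-active inputs.

    "Soft" pruning skips further write pulses instead of removing the
    synapse, so a frozen eNVM cell simply keeps its stored conductance.
    """
    k = int(fraction * n_in)                 # number of input lines to freeze
    idle = np.argsort(firing_counts)[:k]     # least-active presynaptic neurons
    trainable[idle, :] = False

def hebbian_update(pre_spikes, post_spikes, lr=0.01):
    """Rate-coded Hebbian-style update, written only where the mask allows."""
    dw = lr * np.outer(pre_spikes, post_spikes)
    w[trainable] += dw[trainable]            # pruned synapses receive no write

# After a warm-up phase, prune using the spike totals observed so far.
firing_counts = rng.poisson(5.0, size=n_in)  # stand-in for measured spike totals
soft_prune(firing_counts, fraction=0.75)
frozen_before = w[~trainable].copy()
hebbian_update(rng.integers(0, 2, n_in), rng.integers(0, 2, n_out))
```

Because every skipped update is a write pulse that is never issued to the memory array, masking updates in this way translates directly into energy savings on eNVM hardware, which is the motivation the abstract gives for pruning during (rather than after) training.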
Main authors: | Shi, Yuhan; Nguyen, Leon; Oh, Sangheon; Liu, Xin; Kuzum, Duygu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2019 |
Subjects: | Neuroscience |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6497807/ https://www.ncbi.nlm.nih.gov/pubmed/31080402 http://dx.doi.org/10.3389/fnins.2019.00405 |
_version_ | 1783415536381591552 |
---|---|
author | Shi, Yuhan Nguyen, Leon Oh, Sangheon Liu, Xin Kuzum, Duygu |
author_facet | Shi, Yuhan Nguyen, Leon Oh, Sangheon Liu, Xin Kuzum, Duygu |
author_sort | Shi, Yuhan |
collection | PubMed |
description | Inspired by the computational efficiency of the biological brain, spiking neural networks (SNNs) emulate biological neural networks, neural codes, dynamics, and circuitry. SNNs show great potential for the implementation of unsupervised learning using in-memory computing. Here, we report an algorithmic optimization that improves energy efficiency of online learning with SNNs on emerging non-volatile memory (eNVM) devices. We develop a pruning method for SNNs by exploiting the output firing characteristics of neurons. Our pruning method can be applied during network training, which is different from previous approaches in the literature that employ pruning on already-trained networks. This approach prevents unnecessary updates of network parameters during training. This algorithmic optimization can complement the energy efficiency of eNVM technology, which offers a unique in-memory computing platform for the parallelization of neural network operations. Our SNN maintains ~90% classification accuracy on the MNIST dataset with up to ~75% pruning, significantly reducing the number of weight updates. The SNN and pruning scheme developed in this work can pave the way toward applications of eNVM-based neuro-inspired systems for energy-efficient online learning in low-power applications. |
format | Online Article Text |
id | pubmed-6497807 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-64978072019-05-10 A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications Shi, Yuhan Nguyen, Leon Oh, Sangheon Liu, Xin Kuzum, Duygu Front Neurosci Neuroscience Inspired by the computational efficiency of the biological brain, spiking neural networks (SNNs) emulate biological neural networks, neural codes, dynamics, and circuitry. SNNs show great potential for the implementation of unsupervised learning using in-memory computing. Here, we report an algorithmic optimization that improves energy efficiency of online learning with SNNs on emerging non-volatile memory (eNVM) devices. We develop a pruning method for SNNs by exploiting the output firing characteristics of neurons. Our pruning method can be applied during network training, which is different from previous approaches in the literature that employ pruning on already-trained networks. This approach prevents unnecessary updates of network parameters during training. This algorithmic optimization can complement the energy efficiency of eNVM technology, which offers a unique in-memory computing platform for the parallelization of neural network operations. Our SNN maintains ~90% classification accuracy on the MNIST dataset with up to ~75% pruning, significantly reducing the number of weight updates. The SNN and pruning scheme developed in this work can pave the way toward applications of eNVM-based neuro-inspired systems for energy-efficient online learning in low-power applications. Frontiers Media S.A. 2019-04-26 /pmc/articles/PMC6497807/ /pubmed/31080402 http://dx.doi.org/10.3389/fnins.2019.00405 Text en Copyright © 2019 Shi, Nguyen, Oh, Liu and Kuzum. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). 
The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Shi, Yuhan Nguyen, Leon Oh, Sangheon Liu, Xin Kuzum, Duygu A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications |
title | A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications |
title_full | A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications |
title_fullStr | A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications |
title_full_unstemmed | A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications |
title_short | A Soft-Pruning Method Applied During Training of Spiking Neural Networks for In-memory Computing Applications |
title_sort | soft-pruning method applied during training of spiking neural networks for in-memory computing applications |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6497807/ https://www.ncbi.nlm.nih.gov/pubmed/31080402 http://dx.doi.org/10.3389/fnins.2019.00405 |
work_keys_str_mv | AT shiyuhan asoftpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT nguyenleon asoftpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT ohsangheon asoftpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT liuxin asoftpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT kuzumduygu asoftpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT shiyuhan softpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT nguyenleon softpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT ohsangheon softpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT liuxin softpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications AT kuzumduygu softpruningmethodappliedduringtrainingofspikingneuralnetworksforinmemorycomputingapplications |