A Curiosity-Based Learning Method for Spiking Neural Networks
Spiking Neural Networks (SNNs) have recently shown favorable performance. Nonetheless, time-consuming neuron-level computation and complex optimization limit their real-time application. Curiosity plays an important role in brain learning, helping biological brains grasp new knowledge...
Main Authors: | Shi, Mengting, Zhang, Tielin, Zeng, Yi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2020 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7020337/ https://www.ncbi.nlm.nih.gov/pubmed/32116621 http://dx.doi.org/10.3389/fncom.2020.00007 |
_version_ | 1783497723634253824 |
---|---|
author | Shi, Mengting Zhang, Tielin Zeng, Yi |
author_facet | Shi, Mengting Zhang, Tielin Zeng, Yi |
author_sort | Shi, Mengting |
collection | PubMed |
description | Spiking Neural Networks (SNNs) have recently shown favorable performance. Nonetheless, time-consuming neuron-level computation and complex optimization limit their real-time application. Curiosity plays an important role in brain learning, helping biological brains grasp new knowledge efficiently and actively. Inspired by this learning mechanism, we propose a curiosity-based SNN (CBSNN) model, which contains four main learning processes. First, the network is trained with biologically plausible plasticity principles to obtain novelty estimations of all samples in only one epoch; second, the CBSNN repeatedly learns the samples whose novelty estimations exceed the novelty threshold and dynamically updates the novelty estimations of samples according to the learning results over five epochs; third, to avoid overfitting on novel samples and forgetting of already-learned samples, the CBSNN retrains all samples in one epoch; finally, steps two and three are repeated periodically until the network converges. Compared with the state-of-the-art Voltage-driven Plasticity-centric SNN (VPSNN) under a standard architecture, our model achieves a higher accuracy of 98.55% with only 54.95% of its computational cost on the MNIST handwritten digit recognition dataset. Similar conclusions hold on other datasets, i.e., Iris, NETtalk, Fashion-MNIST, and CIFAR-10. Further experiments and analysis confirm that such curiosity-based learning is helpful in improving the efficiency of SNNs. To our knowledge, this is the first practical combination of the curiosity mechanism and SNNs, and these improvements will make realistic applications of SNNs on more specific tasks possible within the von Neumann framework. |
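The four learning processes described in the abstract amount to a training schedule built around per-sample novelty estimates. A minimal sketch of that schedule follows; `train_step`, `novelty`, and `converged` are hypothetical placeholders standing in for the paper's plasticity rules and novelty measure, not the authors' implementation:

```python
def train_schedule(samples, train_step, novelty, threshold=0.5,
                   max_rounds=10, converged=lambda history: False):
    """Curiosity-based schedule: train on all samples once, then alternate
    focused epochs on 'novel' samples with full retraining epochs."""
    # Step 1: one epoch over all samples to obtain initial novelty estimates.
    est = {}
    for s in samples:
        train_step(s)
        est[s] = novelty(s)

    history = []
    for _ in range(max_rounds):
        # Step 2: five epochs over currently novel samples only,
        # updating each sample's novelty estimate as learning proceeds.
        for _ in range(5):
            for s in [x for x in samples if est[x] > threshold]:
                train_step(s)
                est[s] = novelty(s)
        # Step 3: one full epoch over all samples, countering overfitting
        # on novel samples and forgetting of already-learned ones.
        for s in samples:
            train_step(s)
            est[s] = novelty(s)
        history.append(dict(est))
        # Step 4: repeat steps 2-3 until convergence.
        if converged(history):
            break
    return est
```

The efficiency gain reported in the abstract comes from step 2 touching only the above-threshold subset, so samples the network has already mastered are revisited only in the periodic full-retraining epochs.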
format | Online Article Text |
id | pubmed-7020337 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-70203372020-02-28 A Curiosity-Based Learning Method for Spiking Neural Networks Shi, Mengting Zhang, Tielin Zeng, Yi Front Comput Neurosci Neuroscience Spiking Neural Networks (SNNs) have recently shown favorable performance. Nonetheless, time-consuming neuron-level computation and complex optimization limit their real-time application. Curiosity plays an important role in brain learning, helping biological brains grasp new knowledge efficiently and actively. Inspired by this learning mechanism, we propose a curiosity-based SNN (CBSNN) model, which contains four main learning processes. First, the network is trained with biologically plausible plasticity principles to obtain novelty estimations of all samples in only one epoch; second, the CBSNN repeatedly learns the samples whose novelty estimations exceed the novelty threshold and dynamically updates the novelty estimations of samples according to the learning results over five epochs; third, to avoid overfitting on novel samples and forgetting of already-learned samples, the CBSNN retrains all samples in one epoch; finally, steps two and three are repeated periodically until the network converges. Compared with the state-of-the-art Voltage-driven Plasticity-centric SNN (VPSNN) under a standard architecture, our model achieves a higher accuracy of 98.55% with only 54.95% of its computational cost on the MNIST handwritten digit recognition dataset. Similar conclusions hold on other datasets, i.e., Iris, NETtalk, Fashion-MNIST, and CIFAR-10. Further experiments and analysis confirm that such curiosity-based learning is helpful in improving the efficiency of SNNs. To our knowledge, this is the first practical combination of the curiosity mechanism and SNNs, and these improvements will make realistic applications of SNNs on more specific tasks possible within the von Neumann framework. Frontiers Media S.A. 
2020-02-07 /pmc/articles/PMC7020337/ /pubmed/32116621 http://dx.doi.org/10.3389/fncom.2020.00007 Text en Copyright © 2020 Shi, Zhang and Zeng. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Shi, Mengting Zhang, Tielin Zeng, Yi A Curiosity-Based Learning Method for Spiking Neural Networks |
title | A Curiosity-Based Learning Method for Spiking Neural Networks |
title_full | A Curiosity-Based Learning Method for Spiking Neural Networks |
title_fullStr | A Curiosity-Based Learning Method for Spiking Neural Networks |
title_full_unstemmed | A Curiosity-Based Learning Method for Spiking Neural Networks |
title_short | A Curiosity-Based Learning Method for Spiking Neural Networks |
title_sort | curiosity-based learning method for spiking neural networks |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7020337/ https://www.ncbi.nlm.nih.gov/pubmed/32116621 http://dx.doi.org/10.3389/fncom.2020.00007 |
work_keys_str_mv | AT shimengting acuriositybasedlearningmethodforspikingneuralnetworks AT zhangtielin acuriositybasedlearningmethodforspikingneuralnetworks AT zengyi acuriositybasedlearningmethodforspikingneuralnetworks AT shimengting curiositybasedlearningmethodforspikingneuralnetworks AT zhangtielin curiositybasedlearningmethodforspikingneuralnetworks AT zengyi curiositybasedlearningmethodforspikingneuralnetworks |