Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification
Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations, therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations, whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs, in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications.
Main Authors: | Rueckauer, Bodo; Lungu, Iulia-Alexandra; Hu, Yuhuang; Pfeiffer, Michael; Liu, Shih-Chii |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2017 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5770641/ https://www.ncbi.nlm.nih.gov/pubmed/29375284 http://dx.doi.org/10.3389/fnins.2017.00682 |
_version_ | 1783293110761029632 |
author | Rueckauer, Bodo Lungu, Iulia-Alexandra Hu, Yuhuang Pfeiffer, Michael Liu, Shih-Chii |
author_facet | Rueckauer, Bodo Lungu, Iulia-Alexandra Hu, Yuhuang Pfeiffer, Michael Liu, Shih-Chii |
author_sort | Rueckauer, Bodo |
collection | PubMed |
description | Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications. |
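The conversion idea summarized in the abstract rests on a simple correspondence: the firing rate of a non-leaky integrate-and-fire (IF) neuron driven by a constant input approximates a ReLU activation. The sketch below illustrates that correspondence only; it is not the authors' released code, and the function name and parameters are hypothetical. The reset-by-subtraction mechanism shown is the variant this paper analyzes as preserving rate information.

```python
def simulate_if_neuron(input_current, n_steps=1000, v_thresh=1.0):
    """Run a reset-by-subtraction IF neuron for n_steps with a constant
    input and return its average firing rate (spikes per time step)."""
    v = 0.0        # membrane potential
    spikes = 0
    for _ in range(n_steps):
        v += input_current       # integrate the constant input
        if v >= v_thresh:
            v -= v_thresh        # subtract threshold instead of resetting
            spikes += 1          # to zero, so no charge is discarded
    return spikes / n_steps

# The long-run spike rate tracks max(0, input), i.e. a ReLU,
# for inputs below the threshold:
for x in (-0.2, 0.0, 0.3, 0.7):
    print(f"input={x:+.1f}  rate={simulate_if_neuron(x):.3f}")
```

Negative inputs never drive the potential above threshold, so the rate is zero there, matching the ReLU's flat region; positive inputs yield a rate close to the input value, with the approximation improving as `n_steps` grows.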
format | Online Article Text |
id | pubmed-5770641 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-5770641 2018-01-26 Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification Rueckauer, Bodo Lungu, Iulia-Alexandra Hu, Yuhuang Pfeiffer, Michael Liu, Shih-Chii Front Neurosci Neuroscience Spiking neural networks (SNNs) can potentially offer an efficient way of doing inference because the neurons in the networks are sparsely activated and computations are event-driven. Previous work showed that simple continuous-valued deep Convolutional Neural Networks (CNNs) can be converted into accurate spiking equivalents. These networks did not include certain common operations such as max-pooling, softmax, batch-normalization and Inception-modules. This paper presents spiking equivalents of these operations therefore allowing conversion of nearly arbitrary CNN architectures. We show conversion of popular CNN architectures, including VGG-16 and Inception-v3, into SNNs that produce the best results reported to date on MNIST, CIFAR-10 and the challenging ImageNet dataset. SNNs can trade off classification error rate against the number of available operations whereas deep continuous-valued neural networks require a fixed number of operations to achieve their classification error rate. From the examples of LeNet for MNIST and BinaryNet for CIFAR-10, we show that with an increase in error rate of a few percentage points, the SNNs can achieve more than 2x reductions in operations compared to the original CNNs. This highlights the potential of SNNs in particular when deployed on power-efficient neuromorphic spiking neuron chips, for use in embedded applications. Frontiers Media S.A. 2017-12-07 /pmc/articles/PMC5770641/ /pubmed/29375284 http://dx.doi.org/10.3389/fnins.2017.00682 Text en Copyright © 2017 Rueckauer, Lungu, Hu, Pfeiffer and Liu. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). 
The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Rueckauer, Bodo Lungu, Iulia-Alexandra Hu, Yuhuang Pfeiffer, Michael Liu, Shih-Chii Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification |
title | Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification |
title_full | Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification |
title_fullStr | Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification |
title_full_unstemmed | Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification |
title_short | Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks for Image Classification |
title_sort | conversion of continuous-valued deep networks to efficient event-driven networks for image classification |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5770641/ https://www.ncbi.nlm.nih.gov/pubmed/29375284 http://dx.doi.org/10.3389/fnins.2017.00682 |
work_keys_str_mv | AT rueckauerbodo conversionofcontinuousvalueddeepnetworkstoefficienteventdrivennetworksforimageclassification AT lunguiuliaalexandra conversionofcontinuousvalueddeepnetworkstoefficienteventdrivennetworksforimageclassification AT huyuhuang conversionofcontinuousvalueddeepnetworkstoefficienteventdrivennetworksforimageclassification AT pfeiffermichael conversionofcontinuousvalueddeepnetworkstoefficienteventdrivennetworksforimageclassification AT liushihchii conversionofcontinuousvalueddeepnetworkstoefficienteventdrivennetworksforimageclassification |