Memory-Efficient Deep Learning on a SpiNNaker 2 Prototype

The memory requirement of deep learning algorithms is considered incompatible with the memory restriction of energy-efficient hardware. A low memory footprint can be achieved by pruning obsolete connections or reducing the precision of connection strengths after the network has been trained. Yet, these techniques are not applicable to the case when neural networks have to be trained directly on hardware due to the hard memory constraints. Deep Rewiring (DEEP R) is a training algorithm which continuously rewires the network while preserving very sparse connectivity all along the training procedure. We apply DEEP R to a deep neural network implementation on a prototype chip of the 2nd generation SpiNNaker system. The local memory of a single core on this chip is limited to 64 KB and a deep network architecture is trained entirely within this constraint without the use of external memory. Throughout training, the proportion of active connections is limited to 1.3%. On the handwritten digits dataset MNIST, this extremely sparse network achieves 96.6% classification accuracy at convergence. Utilizing the multi-processor feature of the SpiNNaker system, we found very good scaling in terms of computation time, per-core memory consumption, and energy constraints. When compared to an x86 CPU implementation, neural network training on the SpiNNaker 2 prototype improves power and energy consumption by two orders of magnitude.
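
The abstract centers on the DEEP R training algorithm, which keeps connectivity sparse throughout training by letting connections whose parameter crosses zero go dormant and activating randomly drawn replacements, so the total connection count stays fixed. As a rough illustration only, here is a minimal NumPy sketch of one such rewiring step; the function name, hyperparameter values (eta, alpha, temp), and the near-zero reinitialization are illustrative assumptions, not details of the paper's fixed-point SpiNNaker 2 implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_r_step(theta, sign, active, grad, eta=0.05, alpha=1e-4, temp=1e-5):
    """One illustrative DEEP R update over a flat pool of potential connections.

    theta  : magnitude parameter of every potential connection
    sign   : fixed sign (+1 or -1) assigned to each potential connection
    active : boolean mask of currently active connections
    grad   : loss gradient w.r.t. the effective weights w = sign * theta
    """
    k = int(active.sum())  # connection budget to preserve

    # Noisy gradient step with L1 shrinkage, applied to active connections only.
    noise = np.sqrt(2.0 * eta * temp) * rng.standard_normal(theta.shape)
    theta = theta - (eta * (sign * grad + alpha) - noise) * active

    # Connections whose magnitude parameter drops below zero become dormant.
    active = active & (theta > 0.0)

    # Reactivate randomly drawn dormant connections to restore the fixed count.
    n_new = k - int(active.sum())
    if n_new > 0:
        dormant = np.flatnonzero(~active)
        revive = rng.choice(dormant, size=n_new, replace=False)
        active[revive] = True
        theta[revive] = 1e-12  # reactivated connections start near zero (assumed init)

    return theta, active
```

Under this scheme the effective weight vector is sign * theta * active, so at any point in training only the budgeted fraction of connections (1.3% in the paper) carries a nonzero weight and needs to be stored, which is what makes training within 64 KB of per-core local memory feasible.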

Bibliographic Details
Main Authors: Liu, Chen, Bellec, Guillaume, Vogginger, Bernhard, Kappel, David, Partzsch, Johannes, Neumärker, Felix, Höppner, Sebastian, Maass, Wolfgang, Furber, Steve B., Legenstein, Robert, Mayr, Christian G.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018-11-16
Journal: Front Neurosci
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6250847/
https://www.ncbi.nlm.nih.gov/pubmed/30505263
http://dx.doi.org/10.3389/fnins.2018.00840

Copyright © 2018 Liu, Bellec, Vogginger, Kappel, Partzsch, Neumärker, Höppner, Maass, Furber, Legenstein and Mayr. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.