
Streaming Batch Eigenupdates for Hardware Neural Networks

Neural networks based on nanodevices, such as metal oxide memristors, phase change memories, and flash memory cells, have generated considerable interest for their increased energy efficiency and density in comparison to graphics processing units (GPUs) and central processing units (CPUs). Though immense acceleration of the training process can be achieved by leveraging the fact that the time complexity of training does not scale with the network size, that acceleration is limited by the space complexity of stochastic gradient descent, which grows quadratically. The main objective of this work is to reduce this space complexity by using low-rank approximations of stochastic gradient descent. This low space complexity, combined with streaming methods, allows for significant reductions in memory and compute overhead, opening the door for improvements in the area, time, and energy efficiency of training. We refer to this algorithm, and the architecture that implements it, as the streaming batch eigenupdate (SBE) approach.
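
To make the quadratic-space problem concrete: each training sample contributes a rank-1 outer product to the weight gradient, so naively accumulating a minibatch costs memory proportional to the full weight array. The sketch below illustrates in NumPy how such a batch of outer products can be compressed to a few principal components before being written to the device array. The layer sizes, variable names, and the QR-then-SVD factorization are illustrative assumptions for exposition, not the implementation from the paper.

```python
# A minimal sketch of the core idea behind low-rank gradient reduction,
# assuming a single linear layer with weights W of shape (n_out, n_in).
# All names and sizes here are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_out, n_in, batch, rank = 64, 128, 32, 4   # hypothetical sizes; rank << batch

# Each sample i contributes a rank-1 outer product delta_i x_i^T to the
# gradient, so the naive batch gradient costs O(n_out * n_in) to store.
X = rng.standard_normal((n_in, batch))    # streamed inputs x_i, one per column
D = rng.standard_normal((n_out, batch))   # streamed error signals delta_i

# Compress without ever forming the n_out x n_in gradient D @ X.T:
# QR-factor the two thin matrices, then SVD the small (batch x batch) core.
Qd, Rd = np.linalg.qr(D)
Qx, Rx = np.linalg.qr(X)
U, s, Vt = np.linalg.svd(Rd @ Rx.T)

# Keep the top `rank` singular directions: these few outer products play the
# role of the "eigenupdates" applied to the device array, replacing `batch`
# separate rank-1 updates.
P = Qd @ U[:, :rank]                       # (n_out, rank)
Q = (Qx @ Vt.T[:, :rank]) * s[:rank]       # (n_in, rank), columns scaled

grad_full = D @ X.T                        # what plain batch SGD would apply
grad_low = P @ Q.T                         # rank-`rank` approximation
print("relative error:",
      np.linalg.norm(grad_full - grad_low) / np.linalg.norm(grad_full))
```

Because the SVD is taken over a batch-by-batch core matrix rather than the full gradient, the reduction never materializes a weight-sized intermediate, which is the property that streaming formulations of this kind rely on.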

Bibliographic Details
Main Authors: Hoskins, Brian D., Daniels, Matthew W., Huang, Siyuan, Madhavan, Advait, Adam, Gina C., Zhitenev, Nikolai, McClelland, Jabez J., Stiles, Mark D.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2019
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6691093/
https://www.ncbi.nlm.nih.gov/pubmed/31447628
http://dx.doi.org/10.3389/fnins.2019.00793
collection PubMed
id pubmed-6691093
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-6691093 2019-08-23 Front Neurosci (Neuroscience) Frontiers Media S.A. 2019-08-06 /pmc/articles/PMC6691093/ /pubmed/31447628 http://dx.doi.org/10.3389/fnins.2019.00793 Text en Copyright © 2019 Hoskins, Daniels, Huang, Madhavan, Adam, Zhitenev, McClelland and Stiles. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.