
Algorithm for Training Neural Networks on Resistive Device Arrays

Hardware architectures composed of resistive cross-point device arrays can provide significant power and speed benefits for deep neural network training workloads that use the stochastic gradient descent (SGD) and backpropagation (BP) algorithms. The training accuracy on this emerging analog hardware, however, depends strongly on the switching characteristics of the cross-point elements. One key requirement is that these resistive devices change conductance symmetrically when subjected to positive or negative pulse stimuli. Here, we present a new training algorithm, the so-called "Tiki-Taka" algorithm, that eliminates this stringent symmetry requirement. We show that device asymmetry introduces an unintentional implicit cost term into the SGD algorithm; in the "Tiki-Taka" algorithm, a coupled dynamical system simultaneously minimizes both the original objective function of the neural network and the unintentional cost term due to device asymmetry, in a self-consistent fashion. We tested the validity of the new algorithm on a range of network architectures, including fully connected, convolutional, and LSTM networks. Simulation results on these networks show that the accuracy achieved by the conventional SGD algorithm with symmetric (ideal) device switching characteristics is matched by the "Tiki-Taka" algorithm with non-symmetric (non-ideal) devices. Moreover, all operations performed on the arrays remain parallel, so the implementation cost of the new algorithm on array architectures is minimal and the aforementioned power and speed benefits are preserved. These algorithmic improvements are crucial for relaxing the material specifications and realizing technologically viable resistive crossbar arrays that outperform digital accelerators on comparable training tasks.
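
The abstract's central idea, a coupled dynamical system in which gradient updates land on one device array while a second array slowly accumulates the result, can be illustrated with a short simulation. Below is a minimal NumPy sketch: the soft-bounds asymmetric device model, the weight decomposition W = gamma*A + C, and the hyperparameters gamma, transfer_every, and transfer_lr are illustrative assumptions for a toy linear-regression task, not details taken from this record.

    import numpy as np

    rng = np.random.default_rng(0)

    def asymmetric_step(w, grad, lr, w_max=1.0):
        # Toy soft-bounds device model (assumption): positive pulses shrink as
        # w approaches +w_max and negative pulses shrink as w approaches
        # -w_max, so paired up/down pulses do not cancel -- the asymmetry
        # the paper targets.
        step = -lr * grad
        scale = np.where(step > 0, 1.0 - w / w_max, 1.0 + w / w_max)
        return w + step * np.clip(scale, 0.0, 2.0)

    def tiki_taka_fit(x, y, steps=3000, lr=0.05, gamma=0.5,
                      transfer_every=10, transfer_lr=0.05):
        # Coupled system: effective weights W = gamma*A + C. Gradients are
        # applied to A through the asymmetric device model; every
        # transfer_every steps, A is read out and folded into C.
        n_in, n_out = x.shape[1], y.shape[1]
        A = np.zeros((n_in, n_out))
        C = np.zeros((n_in, n_out))
        for t in range(steps):
            i = rng.integers(len(x))
            xi, yi = x[i:i + 1], y[i:i + 1]
            err = xi @ (gamma * A + C) - yi    # forward pass, squared loss
            grad = xi.T @ err                  # rank-one (outer product) update
            A = asymmetric_step(A, grad, lr)
            if t % transfer_every == 0:
                C += transfer_lr * A           # slow transfer from A into C
        return gamma * A + C

    # Toy task: recover a random linear map from noiseless samples.
    x = rng.normal(size=(256, 8))
    w_true = rng.normal(size=(8, 4))
    w_hat = tiki_taka_fit(x, x @ w_true)
    print("recovery error:", np.linalg.norm(w_hat - w_true))

The coupling behaves roughly like an integral controller: residual error keeps nudging A through the imperfect devices, while the slow transfer into C steers the effective weights toward the optimum even though individual up/down pulses on A do not cancel.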


Bibliographic Details
Main Authors: Gokmen, Tayfun; Haensch, Wilfried
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020-02-26 (Front Neurosci)
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7054461/
https://www.ncbi.nlm.nih.gov/pubmed/32174807
http://dx.doi.org/10.3389/fnins.2020.00103
Rights: Copyright © 2020 Gokmen and Haensch. Open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): http://creativecommons.org/licenses/by/4.0/