Training LSTM Networks With Resistive Cross-Point Devices
In our previous work we have shown that resistive cross-point devices, so-called resistive processing unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks. In this work, we further extend the RPU con...
| Main Authors: | Gokmen, Tayfun; Rasch, Malte J.; Haensch, Wilfried |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2018 |
| Subjects: | Neuroscience |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6207602/ https://www.ncbi.nlm.nih.gov/pubmed/30405334 http://dx.doi.org/10.3389/fnins.2018.00745 |
_version_ | 1783366541442547712 |
---|---|
author | Gokmen, Tayfun; Rasch, Malte J.; Haensch, Wilfried |
author_facet | Gokmen, Tayfun; Rasch, Malte J.; Haensch, Wilfried |
author_sort | Gokmen, Tayfun |
collection | PubMed |
description | In our previous work we have shown that resistive cross-point devices, so-called resistive processing unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks. In this work, we further extend the RPU concept for training recurrent neural networks (RNNs), namely LSTMs. We show that the mapping of recurrent layers is very similar to the mapping of fully connected layers, and therefore the RPU concept can potentially provide large acceleration factors for RNNs as well. In addition, we study the effect of various device imperfections and system parameters on training performance. Symmetry of updates becomes even more crucial for RNNs; already a few percent asymmetry results in an increase in the test error compared to the ideal case trained with floating-point numbers. Furthermore, the input signal resolution to the device arrays needs to be at least 7 bits for successful training. However, we show that a stochastic rounding scheme can reduce the input signal resolution back to 5 bits. Further, we find that RPU device variations and hardware noise are enough to mitigate overfitting, so that there is less need for using dropout. Here we attempt to study the validity of the RPU approach by simulating large-scale networks. For instance, in terms of the total number of multiplication and summation operations performed per epoch, the models studied here are roughly 1500 times larger than the more commonly studied multilayer perceptron models trained on the MNIST dataset. |
format | Online Article Text |
id | pubmed-6207602 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-6207602 2018-11-07 Training LSTM Networks With Resistive Cross-Point Devices Gokmen, Tayfun; Rasch, Malte J.; Haensch, Wilfried Front Neurosci Neuroscience In our previous work we have shown that resistive cross-point devices, so-called resistive processing unit (RPU) devices, can provide significant power and speed benefits when training deep fully connected networks as well as convolutional neural networks. In this work, we further extend the RPU concept for training recurrent neural networks (RNNs), namely LSTMs. We show that the mapping of recurrent layers is very similar to the mapping of fully connected layers, and therefore the RPU concept can potentially provide large acceleration factors for RNNs as well. In addition, we study the effect of various device imperfections and system parameters on training performance. Symmetry of updates becomes even more crucial for RNNs; already a few percent asymmetry results in an increase in the test error compared to the ideal case trained with floating-point numbers. Furthermore, the input signal resolution to the device arrays needs to be at least 7 bits for successful training. However, we show that a stochastic rounding scheme can reduce the input signal resolution back to 5 bits. Further, we find that RPU device variations and hardware noise are enough to mitigate overfitting, so that there is less need for using dropout. Here we attempt to study the validity of the RPU approach by simulating large-scale networks. For instance, in terms of the total number of multiplication and summation operations performed per epoch, the models studied here are roughly 1500 times larger than the more commonly studied multilayer perceptron models trained on the MNIST dataset. Frontiers Media S.A. 2018-10-24 /pmc/articles/PMC6207602/ /pubmed/30405334 http://dx.doi.org/10.3389/fnins.2018.00745 Text en Copyright © 2018 Gokmen, Rasch and Haensch. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience; Gokmen, Tayfun; Rasch, Malte J.; Haensch, Wilfried; Training LSTM Networks With Resistive Cross-Point Devices |
title | Training LSTM Networks With Resistive Cross-Point Devices |
title_full | Training LSTM Networks With Resistive Cross-Point Devices |
title_fullStr | Training LSTM Networks With Resistive Cross-Point Devices |
title_full_unstemmed | Training LSTM Networks With Resistive Cross-Point Devices |
title_short | Training LSTM Networks With Resistive Cross-Point Devices |
title_sort | training lstm networks with resistive cross-point devices |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6207602/ https://www.ncbi.nlm.nih.gov/pubmed/30405334 http://dx.doi.org/10.3389/fnins.2018.00745 |
work_keys_str_mv | AT gokmentayfun traininglstmnetworkswithresistivecrosspointdevices AT raschmaltej traininglstmnetworkswithresistivecrosspointdevices AT haenschwilfried traininglstmnetworkswithresistivecrosspointdevices |
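
The abstract above makes two concrete technical points that a short sketch can make tangible: an LSTM's recurrent layer maps onto a crossbar array much like a fully connected layer (the four gate matrices stack into one weight matrix applied to the concatenated input and hidden state), and the signal fed into the array can be quantized to roughly 5 bits when stochastic rounding is used. The Python sketch below is a minimal, hypothetical illustration of those two ideas only; it is not code from the paper, and all names, dimensions, and the fixed [-1, 1] input range are assumptions made for the example.

```python
# Minimal, illustrative sketch (not code from the paper): an LSTM step expressed
# as one stacked matrix-vector product -- the operation an RPU crossbar array
# would perform in analog -- with the array's input signal quantized to a small
# number of bits via stochastic rounding. All sizes and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_round(x, bits, x_max=1.0):
    """Quantize x to 2**bits levels on [-x_max, x_max] with stochastic rounding.

    Each value is rounded up or down with probability proportional to its
    distance to the two neighboring quantization levels, so the rounding error
    is zero-mean on average.
    """
    levels = 2 ** bits - 1
    step = 2.0 * x_max / levels
    scaled = (np.clip(x, -x_max, x_max) + x_max) / step   # in [0, levels]
    low = np.floor(scaled)
    prob_up = scaled - low                                  # fractional part
    rounded = low + (rng.random(x.shape) < prob_up)         # round up w.p. prob_up
    return rounded * step - x_max

def lstm_step(x, h, c, W, b, input_bits=5):
    """One LSTM time step with the four gate matrices stacked into W.

    W has shape (4 * hidden, input + hidden), i.e. the same layout as a fully
    connected layer, which is why it maps onto a crossbar array the same way.
    """
    v = np.concatenate([x, h])
    v_q = stochastic_round(v, bits=input_bits)   # quantized signal sent to the array
    gates = W @ v_q + b                          # analog multiply-accumulate on the array
    i, f, g, o = np.split(gates, 4)
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    g = np.tanh(g)
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Tiny usage example with made-up dimensions.
n_in, n_hidden = 8, 16
W = rng.normal(scale=0.1, size=(4 * n_hidden, n_in + n_hidden))
b = np.zeros(4 * n_hidden)
x = rng.normal(size=n_in)
h = np.zeros(n_hidden)
c = np.zeros(n_hidden)
h, c = lstm_step(x, h, c, W, b, input_bits=5)
```

Because the rounding direction is chosen with probability proportional to the distance to the neighboring quantization levels, the quantization error is unbiased on average, which is the property that lets the input resolution be lowered (here to 5 bits) without systematically skewing the computation.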