
Mixed-Precision Deep Learning Based on Computational Memory

Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive, and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight-update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 172× improvement in the energy efficiency of the architecture when used for training a multilayer perceptron, compared with a dedicated fully digital 32-bit implementation.
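The training scheme described in the abstract can be summarized in a short sketch. The NumPy code below is an illustrative reconstruction of the accumulate-and-transfer idea only: the class name, the granularity EPS, the write-noise model SIGMA, and all numeric values are hypothetical assumptions for illustration, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    EPS = 0.01    # assumed device granularity: average conductance change per pulse
    SIGMA = 0.3   # assumed relative write noise of a PCM programming pulse

    class MixedPrecisionLayer:
        # One linear layer trained as the abstract describes: the crossbar holds
        # imprecise weights W, while a digital accumulator chi collects the
        # weight updates in high precision and releases them in device-sized steps.
        def __init__(self, n_in, n_out):
            self.W = rng.normal(0.0, 0.1, (n_out, n_in))  # conductance-encoded weights
            self.chi = np.zeros_like(self.W)              # high-precision accumulator

        def forward(self, x):
            # Weighted summation performed in place in the crossbar (idealized
            # here; read noise and quantization are omitted for brevity).
            return self.W @ x

        def update(self, grad, lr):
            # Accumulate the exact weight update digitally.
            self.chi -= lr * grad
            # Transfer whole multiples of EPS to the devices; keep the residue.
            n_pulses = np.trunc(self.chi / EPS)
            transfer = n_pulses * EPS
            # Each programming pulse lands imprecisely (noisy conductance update).
            self.W += transfer * (1.0 + SIGMA * rng.normal(size=transfer.shape))
            self.chi -= transfer

Because the accumulator only ever gives up the intended transfer amount, programming noise perturbs the stored conductances but never corrupts the gradient information retained digitally, which is the property the abstract credits for reaching accuracies comparable to floating-point implementations.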


Bibliographic Details
Main Authors: Nandakumar, S. R., Le Gallo, Manuel, Piveteau, Christophe, Joshi, Vinay, Mariani, Giovanni, Boybat, Irem, Karunaratne, Geethan, Khaddam-Aljameh, Riduan, Egger, Urs, Petropoulos, Anastasios, Antonakopoulos, Theodore, Rajendran, Bipin, Sebastian, Abu, Eleftheriou, Evangelos
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7235420/
https://www.ncbi.nlm.nih.gov/pubmed/32477047
http://dx.doi.org/10.3389/fnins.2020.00406
_version_ 1783535964152397824
author Nandakumar, S. R.
Le Gallo, Manuel
Piveteau, Christophe
Joshi, Vinay
Mariani, Giovanni
Boybat, Irem
Karunaratne, Geethan
Khaddam-Aljameh, Riduan
Egger, Urs
Petropoulos, Anastasios
Antonakopoulos, Theodore
Rajendran, Bipin
Sebastian, Abu
Eleftheriou, Evangelos
author_sort Nandakumar, S. R.
collection PubMed
description Deep neural networks (DNNs) have revolutionized the field of artificial intelligence and have achieved unprecedented success in cognitive tasks such as image and speech recognition. Training of large DNNs, however, is computationally intensive, and this has motivated the search for novel computing architectures targeting this application. A computational memory unit with nanoscale resistive memory devices organized in crossbar arrays could store the synaptic weights in their conductance states and perform the expensive weighted summations in place in a non-von Neumann manner. However, updating the conductance states in a reliable manner during the weight-update process is a fundamental challenge that limits the training accuracy of such an implementation. Here, we propose a mixed-precision architecture that combines a computational memory unit performing the weighted summations and imprecise conductance updates with a digital processing unit that accumulates the weight updates in high precision. A combined hardware/software training experiment of a multilayer perceptron based on the proposed architecture using a phase-change memory (PCM) array achieves 97.73% test accuracy on the task of classifying handwritten digits (based on the MNIST dataset), within 0.6% of the software baseline. The architecture is further evaluated using accurate behavioral models of PCM on a wide class of networks, namely convolutional neural networks, long short-term memory networks, and generative adversarial networks. Accuracies comparable to those of floating-point implementations are achieved without being constrained by the non-idealities associated with the PCM devices. A system-level study demonstrates a 172× improvement in the energy efficiency of the architecture when used for training a multilayer perceptron, compared with a dedicated fully digital 32-bit implementation.
format Online
Article
Text
id pubmed-7235420
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-7235420 2020-05-29 Mixed-Precision Deep Learning Based on Computational Memory
Front Neurosci (Neuroscience)
Frontiers Media S.A. 2020-05-12 /pmc/articles/PMC7235420/ /pubmed/32477047 http://dx.doi.org/10.3389/fnins.2020.00406
Text en Copyright © 2020 Nandakumar, Le Gallo, Piveteau, Joshi, Mariani, Boybat, Karunaratne, Khaddam-Aljameh, Egger, Petropoulos, Antonakopoulos, Rajendran, Sebastian and Eleftheriou.
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Mixed-Precision Deep Learning Based on Computational Memory
title_sort mixed-precision deep learning based on computational memory
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7235420/
https://www.ncbi.nlm.nih.gov/pubmed/32477047
http://dx.doi.org/10.3389/fnins.2020.00406
work_keys_str_mv AT nandakumarsr mixedprecisiondeeplearningbasedoncomputationalmemory
AT legallomanuel mixedprecisiondeeplearningbasedoncomputationalmemory
AT piveteauchristophe mixedprecisiondeeplearningbasedoncomputationalmemory
AT joshivinay mixedprecisiondeeplearningbasedoncomputationalmemory
AT marianigiovanni mixedprecisiondeeplearningbasedoncomputationalmemory
AT boybatirem mixedprecisiondeeplearningbasedoncomputationalmemory
AT karunaratnegeethan mixedprecisiondeeplearningbasedoncomputationalmemory
AT khaddamaljamehriduan mixedprecisiondeeplearningbasedoncomputationalmemory
AT eggerurs mixedprecisiondeeplearningbasedoncomputationalmemory
AT petropoulosanastasios mixedprecisiondeeplearningbasedoncomputationalmemory
AT antonakopoulostheodore mixedprecisiondeeplearningbasedoncomputationalmemory
AT rajendranbipin mixedprecisiondeeplearningbasedoncomputationalmemory
AT sebastianabu mixedprecisiondeeplearningbasedoncomputationalmemory
AT eleftheriouevangelos mixedprecisiondeeplearningbasedoncomputationalmemory