
Optimised weight programming for analogue memory-based deep neural networks

Analogue memory-based deep neural networks provide energy-efficiency and per-area throughput gains relative to state-of-the-art digital counterparts such as graphics processing units. Recent advances focus largely on hardware-aware algorithmic training and improvements to circuits, architectures, and memory devices. Optimal translation of software-trained weights into analogue hardware weights—given the plethora of complex memory non-idealities—represents an equally important task. We report a generalised computational framework that automates the crafting of complex weight programming strategies to minimise accuracy degradations during inference, particularly over time. The framework is agnostic to network structure and generalises well across recurrent, convolutional, and transformer neural networks. As a highly flexible numerical heuristic, the approach accommodates arbitrary device-level complexity, making it potentially relevant for a variety of analogue memories. By quantifying the limit of achievable inference accuracy, it also enables analogue memory-based deep neural network accelerators to reach their full inference potential.
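The abstract describes a numerical heuristic that chooses analogue conductance values so that programmed weights remain accurate as devices drift. As a purely illustrative sketch (not the authors' framework), the following Python toy models a differential conductance pair with a hypothetical programming-noise and power-law-drift device model, then brute-force searches candidate programmings to minimise the expected weight error averaged over several inference times. Every device model, parameter, and function name here is invented for demonstration.

```python
# Illustrative toy only: all device models and parameters are invented;
# this is NOT the framework reported in the paper.
import numpy as np

rng = np.random.default_rng(0)

def effective_weight(g_plus, g_minus, t, nu=0.05, sigma=0.02):
    """Effective weight of a differential conductance pair at time t,
    under a hypothetical device model: multiplicative programming noise
    plus power-law conductance drift."""
    drift = max(t, 1.0) ** (-nu)                  # conductance decay over time
    noise = 1.0 + sigma * rng.standard_normal(2)  # per-device programming noise
    return (g_plus * noise[0] - g_minus * noise[1]) * drift

def program_weight(w_target, t_eval=(1.0, 1e3, 1e6), n_candidates=500, n_mc=64):
    """Brute-force heuristic: sample candidate programmings and keep the one
    minimising mean squared weight error averaged over the evaluation times."""
    best_pair, best_err = None, np.inf
    for _ in range(n_candidates):
        s = rng.uniform(0.8, 1.6)       # amplitude scaling (pre-compensates drift)
        offset = rng.uniform(0.0, 0.5)  # common-mode conductance (raises noise)
        g_p = s * max(w_target, 0.0) + offset
        g_m = s * max(-w_target, 0.0) + offset
        errs = [(effective_weight(g_p, g_m, t) - w_target) ** 2
                for t in t_eval for _ in range(n_mc)]
        err = float(np.mean(errs))
        if err < best_err:
            best_pair, best_err = (g_p, g_m), err
    return best_pair, best_err

pair, err = program_weight(0.3)
print(f"conductance pair (G+, G-): ({pair[0]:.3f}, {pair[1]:.3f}), "
      f"expected MSE: {err:.4g}")
```

In this toy setting the search tends to settle on an amplitude slightly above the target (pre-compensating drift) and a small common-mode conductance (limiting noise), which mirrors the kind of trade-off a time-aware weight programming strategy must navigate.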

Bibliographic Details
Main Authors: Mackin, Charles; Rasch, Malte J.; Chen, An; Timcheck, Jonathan; Bruce, Robert L.; Li, Ning; Narayanan, Pritish; Ambrogio, Stefano; Le Gallo, Manuel; Nandakumar, S. R.; Fasoli, Andrea; Luquin, Jose; Friz, Alexander; Sebastian, Abu; Tsai, Hsinyu; Burr, Geoffrey W.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9247051/
https://www.ncbi.nlm.nih.gov/pubmed/35773285
http://dx.doi.org/10.1038/s41467-022-31405-1
Journal: Nat Commun
Published online: 2022-06-30
License: © The Author(s) 2022. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).