
Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications but only approximately, due to nonidealities that often are non-deterministic or nonlinear. This can adversely impact the achievable inference accuracy....


Bibliographic Details
Main Authors: Rasch, Malte J., Mackin, Charles, Le Gallo, Manuel, Chen, An, Fasoli, Andrea, Odermatt, Frédéric, Li, Ning, Nandakumar, S. R., Narayanan, Pritish, Tsai, Hsinyu, Burr, Geoffrey W., Sebastian, Abu, Narayanan, Vijay
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10469175/
https://www.ncbi.nlm.nih.gov/pubmed/37648721
http://dx.doi.org/10.1038/s41467-023-40770-4
author Rasch, Malte J.
Mackin, Charles
Le Gallo, Manuel
Chen, An
Fasoli, Andrea
Odermatt, Frédéric
Li, Ning
Nandakumar, S. R.
Narayanan, Pritish
Tsai, Hsinyu
Burr, Geoffrey W.
Sebastian, Abu
Narayanan, Vijay
author_facet Rasch, Malte J.
Mackin, Charles
Le Gallo, Manuel
Chen, An
Fasoli, Andrea
Odermatt, Frédéric
Li, Ning
Nandakumar, S. R.
Narayanan, Pritish
Tsai, Hsinyu
Burr, Geoffrey W.
Sebastian, Abu
Narayanan, Vijay
author_sort Rasch, Malte J.
collection PubMed
description Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications but only approximately, due to nonidealities that often are non-deterministic or nonlinear. This can adversely impact the achievable inference accuracy. Here, we develop a hardware-aware retraining approach to systematically examine the accuracy of analog in-memory computing across multiple network topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, we improve significantly on earlier retraining approaches. We show that many larger-scale deep neural networks—including convnets, recurrent networks, and transformers—can in fact be successfully retrained to show iso-accuracy with the floating point implementation. Our results further suggest that nonidealities that add noise to the inputs or outputs, not the weights, have the largest impact on accuracy, and that recurrent networks are particularly robust to all nonidealities.
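To make the idea of hardware-aware retraining concrete, the sketch below shows a minimal noise-injection layer in PyTorch: during the forward pass, Gaussian perturbations are applied to the weights, inputs, and outputs of a matrix-vector multiplication, while the gradient flows through the ideal computation. This is only an illustrative sketch under simplifying assumptions, not the authors' crossbar model (which additionally covers, e.g., quantization and other nonlinear effects); the class name NoisyAnalogLinear and all noise scales are hypothetical placeholders.

```python
# Minimal, illustrative sketch of hardware-aware (noise-injection) retraining
# for an analog crossbar tile. NOT the paper's implementation; noise scales
# below are hypothetical placeholders, not values from the paper.
import torch
import torch.nn as nn

class NoisyAnalogLinear(nn.Module):
    def __init__(self, in_features, out_features,
                 weight_noise=0.02, input_noise=0.01, output_noise=0.01):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.weight_noise = weight_noise   # stands in for conductance programming noise
        self.input_noise = input_noise     # stands in for input-line/DAC nonidealities
        self.output_noise = output_noise   # stands in for output-line/ADC nonidealities

    def forward(self, x):
        if self.training:
            w = self.linear.weight
            # Perturb weights relative to the largest weight in the tile.
            w_noisy = w + torch.randn_like(w) * self.weight_noise * w.abs().max()
            x_noisy = x + torch.randn_like(x) * self.input_noise
            out = nn.functional.linear(x_noisy, w_noisy, self.linear.bias)
            out = out + torch.randn_like(out) * self.output_noise
            # Straight-through trick: the forward value is noisy, but the
            # gradient is taken with respect to the ideal computation.
            ideal = self.linear(x)
            return ideal + (out - ideal).detach()
        return self.linear(x)

# Usage: replace dense layers of a pretrained network with noisy ones
# and fine-tune, so the weights become robust to the injected nonidealities.
layer = NoisyAnalogLinear(512, 256)
y = layer(torch.randn(8, 512))
```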
format Online
Article
Text
id pubmed-10469175
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-10469175 2023-09-01 Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators Rasch, Malte J. Mackin, Charles Le Gallo, Manuel Chen, An Fasoli, Andrea Odermatt, Frédéric Li, Ning Nandakumar, S. R. Narayanan, Pritish Tsai, Hsinyu Burr, Geoffrey W. Sebastian, Abu Narayanan, Vijay Nat Commun Article Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications but only approximately, due to nonidealities that often are non-deterministic or nonlinear. This can adversely impact the achievable inference accuracy. Here, we develop a hardware-aware retraining approach to systematically examine the accuracy of analog in-memory computing across multiple network topologies, and investigate sensitivity and robustness to a broad set of nonidealities. By introducing a realistic crossbar model, we improve significantly on earlier retraining approaches. We show that many larger-scale deep neural networks—including convnets, recurrent networks, and transformers—can in fact be successfully retrained to show iso-accuracy with the floating point implementation. Our results further suggest that nonidealities that add noise to the inputs or outputs, not the weights, have the largest impact on accuracy, and that recurrent networks are particularly robust to all nonidealities. Nature Publishing Group UK 2023-08-30 /pmc/articles/PMC10469175/ /pubmed/37648721 http://dx.doi.org/10.1038/s41467-023-40770-4 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) .
spellingShingle Article
Rasch, Malte J.
Mackin, Charles
Le Gallo, Manuel
Chen, An
Fasoli, Andrea
Odermatt, Frédéric
Li, Ning
Nandakumar, S. R.
Narayanan, Pritish
Tsai, Hsinyu
Burr, Geoffrey W.
Sebastian, Abu
Narayanan, Vijay
Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
title Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
title_full Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
title_fullStr Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
title_full_unstemmed Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
title_short Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
title_sort hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10469175/
https://www.ncbi.nlm.nih.gov/pubmed/37648721
http://dx.doi.org/10.1038/s41467-023-40770-4
work_keys_str_mv AT raschmaltej hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT mackincharles hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT legallomanuel hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT chenan hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT fasoliandrea hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT odermattfrederic hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT lining hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT nandakumarsr hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT narayananpritish hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT tsaihsinyu hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT burrgeoffreyw hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT sebastianabu hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators
AT narayananvijay hardwareawaretrainingforlargescaleanddiversedeeplearninginferenceworkloadsusinginmemorycomputingbasedaccelerators