Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications but only approximately, due to nonidealities that often are non-deterministic or nonlinear. This can adversely impact the achievable inference accuracy…
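To make the abstract's point concrete, below is a minimal illustrative sketch (not taken from the paper) of the usual software simulation behind hardware-aware training: Gaussian noise is injected into the weights at every matrix-vector multiplication, so the forward pass is only approximate, and training against this noisy pass teaches the network to tolerate analog nonidealities. The noise model, the strength `noise_std`, and the helper `noisy_mvm` are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_mvm(W, x, noise_std=0.02):
    """Simulate an analog in-memory matrix-vector multiply.

    Each use of the crossbar perturbs the stored weights with
    Gaussian noise (a simple stand-in for non-deterministic
    device nonidealities), so the result is only approximate.
    Hypothetical noise model, for illustration only.
    """
    W_noisy = W + rng.normal(0.0, noise_std * np.abs(W).max(), size=W.shape)
    return W_noisy @ x

# Hardware-aware training idea: run the forward pass through the
# noisy multiply during training so the learned weights become
# robust to the perturbations seen at inference time.
W = rng.standard_normal((4, 8))
x = rng.standard_normal(8)
print("ideal:", W @ x)
print("noisy:", noisy_mvm(W, x))
```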
Main Authors: Rasch, Malte J., Mackin, Charles, Le Gallo, Manuel, Chen, An, Fasoli, Andrea, Odermatt, Frédéric, Li, Ning, Nandakumar, S. R., Narayanan, Pritish, Tsai, Hsinyu, Burr, Geoffrey W., Sebastian, Abu, Narayanan, Vijay
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10469175/ https://www.ncbi.nlm.nih.gov/pubmed/37648721 http://dx.doi.org/10.1038/s41467-023-40770-4
Similar Items
- Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices
  by: Spoon, Katie, et al.
  Published: (2021)
- Optimised weight programming for analogue memory-based deep neural networks
  by: Mackin, Charles, et al.
  Published: (2022)
- Hardware Accelerated ATLAS Workloads on the WLCG
  by: Forti, Alessandra, et al.
  Published: (2019)
- Hardware Accelerated ATLAS Workloads on the WLCG grid
  by: Forti, Alessandra, et al.
  Published: (2019)
- Brain-Inspired Hardware Solutions for Inference in Bayesian Networks
  by: Bagheriye, Leila, et al.
  Published: (2021)