
Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators

Analog in-memory computing—a promising approach for energy-efficient acceleration of deep learning workloads—computes matrix-vector multiplications only approximately, due to nonidealities that are often nondeterministic or nonlinear. This can adversely impact the achievable inference accuracy....


Bibliographic Details
Main Authors: Rasch, Malte J., Mackin, Charles, Le Gallo, Manuel, Chen, An, Fasoli, Andrea, Odermatt, Frédéric, Li, Ning, Nandakumar, S. R., Narayanan, Pritish, Tsai, Hsinyu, Burr, Geoffrey W., Sebastian, Abu, Narayanan, Vijay
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10469175/
https://www.ncbi.nlm.nih.gov/pubmed/37648721
http://dx.doi.org/10.1038/s41467-023-40770-4