Toward Software-Equivalent Accuracy on Transformer-Based Deep Neural Networks With Analog Memory Devices
Recent advances in deep learning have been driven by ever-increasing model sizes, with networks growing to millions or even billions of parameters. Such enormous models call for fast and energy-efficient hardware accelerators. We study the potential of Analog AI accelerators based on Non-Volatile Me...
Main Authors: Spoon, Katie; Tsai, Hsinyu; Chen, An; Rasch, Malte J.; Ambrogio, Stefano; Mackin, Charles; Fasoli, Andrea; Friz, Alexander M.; Narayanan, Pritish; Stanisavljevic, Milos; Burr, Geoffrey W.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8287521/ ; https://www.ncbi.nlm.nih.gov/pubmed/34290595 ; http://dx.doi.org/10.3389/fncom.2021.675741
Similar Items
- Optimised weight programming for analogue memory-based deep neural networks
  by: Mackin, Charles, et al.
  Published: (2022)
- Hardware-aware training for large-scale and diverse deep learning inference workloads using in-memory computing-based accelerators
  by: Rasch, Malte J., et al.
  Published: (2023)
- An analog-AI chip for energy-efficient speech recognition and transcription
  by: Ambrogio, S., et al.
  Published: (2023)
- Achieving software-equivalent accuracy for hyperdimensional computing with ferroelectric-based in-memory computing
  by: Kazemi, Arman, et al.
  Published: (2022)
- New trends in fusion research
  by: Fasoli, Ambrogio
  Published: (2004)