Gradient Decomposition Methods for Training Neural Networks With Non-ideal Synaptic Devices
While promising for high-capacity machine learning accelerators, memristor devices have non-idealities that prevent software-equivalent accuracies when used for online training. This work uses a combination of Mini-Batch Gradient Descent (MBGD) to average gradients, stochastic rounding to avoid vanishing…
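The abstract names two of the techniques: averaging gradients over a mini-batch (MBGD) and stochastic rounding, which keeps updates smaller than a device's minimum programmable step from being truncated to zero. Below is a minimal NumPy sketch of unbiased stochastic rounding applied to an averaged mini-batch gradient; the quantum `delta`, the batch size, and the learning rate are illustrative assumptions, not values from the paper.

```python
import numpy as np

def stochastic_round(x, delta, rng):
    """Unbiased stochastic rounding of x to multiples of the quantum delta.

    Each entry is rounded up to the next multiple of delta with probability
    equal to its fractional remainder, so E[stochastic_round(x)] == x and
    updates smaller than delta still get applied some of the time instead
    of being silently truncated to zero.
    """
    scaled = x / delta
    floored = np.floor(scaled)
    frac = scaled - floored                  # remainder in [0, 1)
    round_up = rng.random(x.shape) < frac    # round up with probability frac
    return (floored + round_up) * delta

rng = np.random.default_rng(0)

# Mini-batch gradient averaging (MBGD): average per-example gradients...
per_example_grads = rng.normal(scale=1e-4, size=(32, 8, 8))  # batch of 32
g_avg = per_example_grads.mean(axis=0)

# ...then quantize the weight update to the assumed device step delta
# before applying it to the weight array.
w = np.zeros((8, 8))
update = stochastic_round(-0.1 * g_avg, delta=1e-3, rng=rng)
w += update
```

Because the rounding is unbiased, repeated small updates accumulate correctly in expectation even when each individual update is below the programmable step.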
Main Authors: Zhao, Junyun; Huang, Siyuan; Yousuf, Osama; Gao, Yutong; Hoskins, Brian D.; Adam, Gina C.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2021
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8645649/
https://www.ncbi.nlm.nih.gov/pubmed/34880721
http://dx.doi.org/10.3389/fnins.2021.749811
Similar Items
- On-Chip Training Spiking Neural Networks Using Approximated Backpropagation With Analog Synaptic Devices
  by: Kwon, Dongseok, et al.
  Published: (2020)
- Streaming Batch Eigenupdates for Hardware Neural Networks
  by: Hoskins, Brian D., et al.
  Published: (2019)
- Monomial ideals and their decompositions
  by: Moore, W Frank, et al.
  Published: (2018)
- Algorithm for Training Neural Networks on Resistive Device Arrays
  by: Gokmen, Tayfun, et al.
  Published: (2020)
- Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices
  by: Gokmen, Tayfun, et al.
  Published: (2017)