
In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory

Bibliographic Details
Main Authors: Li, Yiyang, Xiao, T. Patrick, Bennett, Christopher H., Isele, Erik, Melianas, Armantas, Tao, Hanbo, Marinella, Matthew J., Salleo, Alberto, Fuller, Elliot J., Talin, A. Alec
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8060477/
https://www.ncbi.nlm.nih.gov/pubmed/33897351
http://dx.doi.org/10.3389/fnins.2021.636127
_version_ 1783681371624964096
author Li, Yiyang
Xiao, T. Patrick
Bennett, Christopher H.
Isele, Erik
Melianas, Armantas
Tao, Hanbo
Marinella, Matthew J.
Salleo, Alberto
Fuller, Elliot J.
Talin, A. Alec
author_sort Li, Yiyang
collection PubMed
description In-memory computing based on non-volatile resistive memory can significantly improve the energy efficiency of artificial neural networks. However, accurate in situ training has been challenging due to the nonlinear and stochastic switching of the resistive memory elements. One promising analog memory is the electrochemical random-access memory (ECRAM), also known as the redox transistor. Its low write currents and linear switching properties across hundreds of analog states enable accurate and massively parallel updates of a full crossbar array, which yield rapid and energy-efficient training. While simulations predict that ECRAM-based neural networks achieve high training accuracy at significantly higher energy efficiency than digital implementations, these predictions have not been experimentally achieved. In this work, we train a 3 × 3 array of ECRAM devices that learns to discriminate several elementary logic gates (AND, OR, NAND). We record the evolution of the network’s synaptic weights during parallel in situ (on-line) training with outer-product updates. Due to the linear and reproducible switching characteristics of the devices, our crossbar simulations not only accurately predict the number of epochs to convergence, but also quantitatively capture the evolution of the weights of individual devices. This first implementation of in situ parallel training, together with the strong agreement with simulation results, is a significant advance toward developing ECRAM into larger crossbar arrays for artificial neural network accelerators, which could enable orders-of-magnitude improvements in the energy efficiency of deep neural networks.
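The description above specifies the training scheme: a 3 × 3 crossbar trained with parallel outer-product (rank-1) weight updates until it discriminates AND, OR, and NAND. As a rough, hypothetical illustration of that update rule only (not the authors' measurement code; the learning rate, logistic activation, and unconstrained floating-point weights standing in for ECRAM conductance states are all assumptions), a minimal NumPy sketch could look like this:

```python
import numpy as np

# Inputs: the four combinations of two binary inputs, plus a constant bias column.
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)

# Targets: one output column per logic gate trained in the paper (AND, OR, NAND).
Y = np.array([[0, 0, 1],
              [0, 1, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 3))  # 3 x 3 "crossbar" of synaptic weights
eta = 0.5                               # assumed learning rate

def forward(x, weights):
    """Weighted sums through the array, followed by a logistic output neuron."""
    return 1.0 / (1.0 + np.exp(-(x @ weights)))

for epoch in range(500):
    out = forward(X, W)
    delta = Y - out          # output errors for this epoch
    # Outer-product update: the rank-1 (batch-summed) weight change that an
    # analog crossbar can apply to every device simultaneously.
    W += eta * X.T @ delta

print(np.round(forward(X, W)))  # should reproduce the AND, OR, NAND truth tables
```

In a physical array, the same rank-1 update would be applied in parallel by driving rows and columns with signals proportional to the inputs and errors; the sketch only mirrors the arithmetic of that update, not the device physics.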
format Online
Article
Text
id pubmed-8060477
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8060477 2021-04-23 In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory. Li, Yiyang; Xiao, T. Patrick; Bennett, Christopher H.; Isele, Erik; Melianas, Armantas; Tao, Hanbo; Marinella, Matthew J.; Salleo, Alberto; Fuller, Elliot J.; Talin, A. Alec. Front Neurosci (Neuroscience). Frontiers Media S.A. 2021-04-08. /pmc/articles/PMC8060477/ /pubmed/33897351 http://dx.doi.org/10.3389/fnins.2021.636127 Text en. Copyright © 2021 Li, Xiao, Bennett, Isele, Melianas, Tao, Marinella, Salleo, Fuller and Talin. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title In situ Parallel Training of Analog Neural Network Using Electrochemical Random-Access Memory
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8060477/
https://www.ncbi.nlm.nih.gov/pubmed/33897351
http://dx.doi.org/10.3389/fnins.2021.636127