
Neural Network Training Acceleration With RRAM-Based Hybrid Synapses

Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. To implement an energy-efficient HNN with high accuracy, high-precision synaptic devices and fully parallel array operations are essential. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recently, there have been attempts to compensate for device nonidealities by using multiple devices per weight. While this brings benefits, the existing parallel updating scheme is difficult to apply to such synaptic units, which significantly increases the updating process's cost in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a "big" synapse and a "small" synapse, together with a matching training method. Unlike previous attempts, our proposed architecture allows array-wise, fully parallel learning with simple array selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiO(x) RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain through a proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the software floating-point implementation (97.92%), even with only 50 conductance states per device. Our results show that efficient training and accurate inference can be achieved with existing RRAM devices.
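The abstract describes the hybrid synaptic unit only at a high level, so the following is a minimal illustrative sketch, not the authors' implementation. It assumes the effective weight is a gain-weighted sum of a coarse "big" device and a fine "small" device, that each device offers only 50 conductance states, and that the frequent parallel updates land on the small device before being periodically transferred to the big one. The gain value, the transfer rule, and the names HybridSynapseArray, update_small, and transfer_to_big are assumptions made for illustration.

import numpy as np

N_STATES = 50                      # conductance states per device (from the abstract)
GAIN = 8.0                         # assumed intrinsic gain of the area-scaled big synapse
G_MAX = 1.0                        # normalized conductance range [0, G_MAX]
STEP = G_MAX / (N_STATES - 1)      # smallest programmable conductance change

def quantize(g):
    """Clip and snap conductances to the finite set of programmable states."""
    return np.clip(np.round(g / STEP) * STEP, 0.0, G_MAX)

class HybridSynapseArray:
    """One weight = a coarse 'big' device (scaled by an intrinsic gain) plus a
    fine 'small' device that absorbs the frequent, fully parallel updates."""

    def __init__(self, rows, cols, seed=0):
        rng = np.random.default_rng(seed)
        self.g_big = quantize(rng.uniform(0.0, G_MAX, (rows, cols)))
        self.g_small = quantize(np.full((rows, cols), G_MAX / 2))

    @property
    def weight(self):
        # Effective weight: the big synapse dominates through its gain, while
        # the zero-centered small synapse supplies fine-grained corrections.
        return GAIN * self.g_big + (self.g_small - G_MAX / 2)

    def update_small(self, delta_w):
        # Frequent gradient updates are applied in parallel to the small array only.
        self.g_small = quantize(self.g_small + delta_w)

    def transfer_to_big(self):
        # Occasionally fold the small synapse's accumulated value into the big
        # synapse (scaled down by the gain), then reset the small synapse.
        self.g_big = quantize(self.g_big + (self.g_small - G_MAX / 2) / GAIN)
        self.g_small = quantize(np.full_like(self.g_small, G_MAX / 2))

# Example usage: apply a few noisy parallel updates, then transfer.
arr = HybridSynapseArray(4, 3)
for _ in range(10):
    arr.update_small(0.01 * np.random.default_rng().standard_normal((4, 3)))
arr.transfer_to_big()
print(arr.weight)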


Bibliographic Details
Main Authors: Choi, Wooseok, Kwak, Myonghoon, Kim, Seyoung, Hwang, Hyunsang
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2021
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8264206/
https://www.ncbi.nlm.nih.gov/pubmed/34248492
http://dx.doi.org/10.3389/fnins.2021.690418
author Choi, Wooseok
Kwak, Myonghoon
Kim, Seyoung
Hwang, Hyunsang
collection PubMed
description Hardware neural networks (HNNs) based on analog synapse arrays excel at accelerating parallel computations. To implement an energy-efficient HNN with high accuracy, high-precision synaptic devices and fully parallel array operations are essential. However, existing resistive memory (RRAM) devices can represent only a finite number of conductance states. Recently, there have been attempts to compensate for device nonidealities by using multiple devices per weight. While this brings benefits, the existing parallel updating scheme is difficult to apply to such synaptic units, which significantly increases the updating process's cost in terms of computation speed, energy, and complexity. Here, we propose an RRAM-based hybrid synaptic unit consisting of a "big" synapse and a "small" synapse, together with a matching training method. Unlike previous attempts, our proposed architecture allows array-wise, fully parallel learning with simple array selection logic. To experimentally verify the hybrid synapse, we exploit Mo/TiO(x) RRAM, which shows promising synaptic properties and an areal dependency of conductance precision. By realizing the intrinsic gain through a proportionally scaled device area, we show that the big and small synapses can be implemented at the device level without modifying the operational scheme. Through neural network simulations, we confirm that the RRAM-based hybrid synapse with the proposed learning method achieves a maximum accuracy of 97%, comparable to the software floating-point implementation (97.92%), even with only 50 conductance states per device. Our results show that efficient training and accurate inference can be achieved with existing RRAM devices.
format Online
Article
Text
id pubmed-8264206
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8264206 2021-07-09
Neural Network Training Acceleration With RRAM-Based Hybrid Synapses
Choi, Wooseok; Kwak, Myonghoon; Kim, Seyoung; Hwang, Hyunsang
Front Neurosci (Neuroscience), Frontiers Media S.A., published 2021-06-24
/pmc/articles/PMC8264206/ /pubmed/34248492 http://dx.doi.org/10.3389/fnins.2021.690418 Text en
Copyright © 2021 Choi, Kwak, Kim and Hwang. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Neural Network Training Acceleration With RRAM-Based Hybrid Synapses
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8264206/
https://www.ncbi.nlm.nih.gov/pubmed/34248492
http://dx.doi.org/10.3389/fnins.2021.690418