Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems

Bibliographic Details
Main Authors: Pham, Khoa Van, Tran, Son Bao, Nguyen, Tien Van, Min, Kyeong-Sik
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6412588/
https://www.ncbi.nlm.nih.gov/pubmed/30791655
http://dx.doi.org/10.3390/mi10020141
_version_ 1783402640363749376
author Pham, Khoa Van
Tran, Son Bao
Nguyen, Tien Van
Min, Kyeong-Sik
author_facet Pham, Khoa Van
Tran, Son Bao
Nguyen, Tien Van
Min, Kyeong-Sik
author_sort Pham, Khoa Van
collection PubMed
description For realizing neural networks with binary memristor crossbars, memristors should be programmed to the high-resistance state (HRS) or the low-resistance state (LRS) according to training algorithms such as backpropagation. Unfortunately, training the memristor crossbar takes a very long time and consumes a large amount of power, because the program-verify scheme of memristor programming is based on incremental programming pulses, in which many programming and verifying pulses are repeated until the target conductance is reached. Thus, reducing the programming time and power is essential for energy-efficient and fast training of memristor networks. In this paper, we compared four different programming schemes: F-F, C-F, F-C, and C-C. In C-C, both HRS and LRS are coarse-programmed. C-F has a coarse-programmed HRS and a fine-programmed LRS. F-C is the reverse of C-F. In F-F, both HRS and LRS are fine-programmed. Comparing the error-energy products of the four schemes, C-F shows the minimum error with the minimum energy consumption. The asymmetrical combination of coarse HRS and fine LRS can significantly reduce the time and energy spent on crossbar training, because only the LRS is fine-programmed. Moreover, the asymmetrical C-F scheme can keep the network's error as small as that of F-F, because the coarse-programmed HRS degrades the error only slightly.
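The description above outlines an incremental program-verify loop and four coarse/fine combinations for the two binary states. The Python sketch below is not from the paper; it only illustrates, under assumed conductance targets, pulse step, and verify tolerances, how a wider (coarse) verify tolerance ends the loop after fewer programming/verify pulses than a tight (fine) tolerance.

```python
# Illustrative sketch (not from the paper): an incremental program-verify loop
# for one binary memristor cell. The verify tolerance models "coarse" (wide,
# few pulses) versus "fine" (tight, many pulses) programming of a state.
# All numeric values below are assumptions for illustration only.

def program_verify(g_initial, g_target, tolerance, g_step=1e-6, max_pulses=1000):
    """Apply incremental programming pulses until the conductance is within
    `tolerance` of `g_target`; return (final conductance, pulses used)."""
    g = g_initial
    pulses = 0
    while abs(g - g_target) > tolerance and pulses < max_pulses:
        # One programming pulse nudges the conductance toward the target;
        # each iteration also stands for one verify (read) operation.
        g += g_step if g < g_target else -g_step
        pulses += 1
    return g, pulses

# Four schemes from the abstract: first letter = HRS precision, second = LRS.
# 'C' (coarse) uses a wide verify tolerance, 'F' (fine) a tight one.
G_HRS, G_LRS = 1e-6, 1e-4            # assumed binary target conductances (S)
TOL = {"C": 2e-5, "F": 1e-6}         # assumed coarse/fine verify tolerances (S)

for scheme in ("F-F", "C-F", "F-C", "C-C"):
    hrs_prec, lrs_prec = scheme[0], scheme[2]
    _, p_hrs = program_verify(5e-5, G_HRS, TOL[hrs_prec])
    _, p_lrs = program_verify(5e-5, G_LRS, TOL[lrs_prec])
    print(f"{scheme}: HRS pulses={p_hrs}, LRS pulses={p_lrs}")
```

Running the sketch shows the trade-off the abstract describes: C-F spends the extra pulses only on the LRS, so its total pulse count (a proxy for programming time and energy) sits between C-C and F-F.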
format Online
Article
Text
id pubmed-6412588
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6412588 2019-04-09 Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems Pham, Khoa Van Tran, Son Bao Nguyen, Tien Van Min, Kyeong-Sik Micromachines (Basel) Article For realizing neural networks with binary memristor crossbars, memristors should be programmed to the high-resistance state (HRS) or the low-resistance state (LRS) according to training algorithms such as backpropagation. Unfortunately, training the memristor crossbar takes a very long time and consumes a large amount of power, because the program-verify scheme of memristor programming is based on incremental programming pulses, in which many programming and verifying pulses are repeated until the target conductance is reached. Thus, reducing the programming time and power is essential for energy-efficient and fast training of memristor networks. In this paper, we compared four different programming schemes: F-F, C-F, F-C, and C-C. In C-C, both HRS and LRS are coarse-programmed. C-F has a coarse-programmed HRS and a fine-programmed LRS. F-C is the reverse of C-F. In F-F, both HRS and LRS are fine-programmed. Comparing the error-energy products of the four schemes, C-F shows the minimum error with the minimum energy consumption. The asymmetrical combination of coarse HRS and fine LRS can significantly reduce the time and energy spent on crossbar training, because only the LRS is fine-programmed. Moreover, the asymmetrical C-F scheme can keep the network's error as small as that of F-F, because the coarse-programmed HRS degrades the error only slightly. MDPI 2019-02-20 /pmc/articles/PMC6412588/ /pubmed/30791655 http://dx.doi.org/10.3390/mi10020141 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Pham, Khoa Van
Tran, Son Bao
Nguyen, Tien Van
Min, Kyeong-Sik
Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems
title Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems
title_full Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems
title_fullStr Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems
title_full_unstemmed Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems
title_short Asymmetrical Training Scheme of Binary-Memristor-Crossbar-Based Neural Networks for Energy-Efficient Edge-Computing Nanoscale Systems
title_sort asymmetrical training scheme of binary-memristor-crossbar-based neural networks for energy-efficient edge-computing nanoscale systems
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6412588/
https://www.ncbi.nlm.nih.gov/pubmed/30791655
http://dx.doi.org/10.3390/mi10020141
work_keys_str_mv AT phamkhoavan asymmetricaltrainingschemeofbinarymemristorcrossbarbasedneuralnetworksforenergyefficientedgecomputingnanoscalesystems
AT transonbao asymmetricaltrainingschemeofbinarymemristorcrossbarbasedneuralnetworksforenergyefficientedgecomputingnanoscalesystems
AT nguyentienvan asymmetricaltrainingschemeofbinarymemristorcrossbarbasedneuralnetworksforenergyefficientedgecomputingnanoscalesystems
AT minkyeongsik asymmetricaltrainingschemeofbinarymemristorcrossbarbasedneuralnetworksforenergyefficientedgecomputingnanoscalesystems