Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory
Processing-in-Memory (PIM) based on Resistive Random Access Memory (RRAM) is an emerging acceleration architecture for artificial neural networks. This paper proposes an RRAM PIM accelerator architecture that does not use Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs). Additionally, no additional memory usage is required to avoid the need for a large amount of data transportation in convolution computation. Partial quantization is introduced to reduce the accuracy loss. The proposed architecture can substantially reduce the overall power consumption and accelerate computation. The simulation results show that the image recognition rate for the Convolutional Neural Network (CNN) algorithm can reach 284 frames per second at 50 MHz using this architecture. The accuracy of the partial quantization remains almost unchanged compared to the algorithm without quantization.
Main Authors: | Wang, Hongzhe; Wang, Junjie; Hu, Hao; Li, Guo; Hu, Shaogang; Yu, Qi; Liu, Zhen; Chen, Tupei; Zhou, Shijie; Liu, Yang |
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Communication |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007456/ https://www.ncbi.nlm.nih.gov/pubmed/36904605 http://dx.doi.org/10.3390/s23052401 |
_version_ | 1784905526012805120 |
author | Wang, Hongzhe Wang, Junjie Hu, Hao Li, Guo Hu, Shaogang Yu, Qi Liu, Zhen Chen, Tupei Zhou, Shijie Liu, Yang |
author_facet | Wang, Hongzhe Wang, Junjie Hu, Hao Li, Guo Hu, Shaogang Yu, Qi Liu, Zhen Chen, Tupei Zhou, Shijie Liu, Yang |
author_sort | Wang, Hongzhe |
collection | PubMed |
description | Processing-in-Memory (PIM) based on Resistive Random Access Memory (RRAM) is an emerging acceleration architecture for artificial neural networks. This paper proposes an RRAM PIM accelerator architecture that does not use Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs). Additionally, no additional memory usage is required to avoid the need for a large amount of data transportation in convolution computation. Partial quantization is introduced to reduce the accuracy loss. The proposed architecture can substantially reduce the overall power consumption and accelerate computation. The simulation results show that the image recognition rate for the Convolutional Neural Network (CNN) algorithm can reach 284 frames per second at 50 MHz using this architecture. The accuracy of the partial quantization remains almost unchanged compared to the algorithm without quantization. |
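The abstract mentions "partial quantization" as the technique that keeps accuracy close to the unquantized algorithm. The paper's exact scheme is not given in this record; as an illustrative sketch only (all names and parameters here are hypothetical, not from the paper), partial quantization can be read as quantizing the weights of selected layers to a low-bit uniform grid while leaving the remaining layers in full precision:

```python
# Illustrative sketch of partial quantization (hypothetical names/parameters;
# the paper's actual scheme is not described in this record).

def quantize_uniform(weights, bits=4):
    """Uniformly quantize a flat list of weights to 2**bits levels
    spanning the weights' own [min, max] range."""
    lo, hi = min(weights), max(weights)
    if hi == lo:
        return list(weights)  # constant layer: nothing to quantize
    levels = (1 << bits) - 1
    step = (hi - lo) / levels
    return [lo + round((w - lo) / step) * step for w in weights]

def partially_quantize(model, layers_to_quantize, bits=4):
    """Return a copy of `model` (layer name -> weight list) in which only
    the named layers are quantized; all others keep full precision."""
    return {
        name: quantize_uniform(w, bits) if name in layers_to_quantize else list(w)
        for name, w in model.items()
    }

# Example: quantize only the convolutional layer, keep the classifier exact.
model = {"conv1": [0.0, 0.1, 0.9, 1.0], "fc": [0.05, -0.3]}
quantized = partially_quantize(model, {"conv1"}, bits=2)
```

The intuition matching the abstract is that restricting quantization to a subset of layers bounds the total rounding error, so the end-to-end accuracy stays close to the full-precision baseline.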
format | Online Article Text |
id | pubmed-10007456 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10007456 2023-03-12 Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory Wang, Hongzhe Wang, Junjie Hu, Hao Li, Guo Hu, Shaogang Yu, Qi Liu, Zhen Chen, Tupei Zhou, Shijie Liu, Yang Sensors (Basel) Communication Processing-in-Memory (PIM) based on Resistive Random Access Memory (RRAM) is an emerging acceleration architecture for artificial neural networks. This paper proposes an RRAM PIM accelerator architecture that does not use Analog-to-Digital Converters (ADCs) and Digital-to-Analog Converters (DACs). Additionally, no additional memory usage is required to avoid the need for a large amount of data transportation in convolution computation. Partial quantization is introduced to reduce the accuracy loss. The proposed architecture can substantially reduce the overall power consumption and accelerate computation. The simulation results show that the image recognition rate for the Convolutional Neural Network (CNN) algorithm can reach 284 frames per second at 50 MHz using this architecture. The accuracy of the partial quantization remains almost unchanged compared to the algorithm without quantization. MDPI 2023-02-21 /pmc/articles/PMC10007456/ /pubmed/36904605 http://dx.doi.org/10.3390/s23052401 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Communication Wang, Hongzhe Wang, Junjie Hu, Hao Li, Guo Hu, Shaogang Yu, Qi Liu, Zhen Chen, Tupei Zhou, Shijie Liu, Yang Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory |
title | Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory |
title_full | Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory |
title_fullStr | Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory |
title_full_unstemmed | Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory |
title_short | Ultra-High-Speed Accelerator Architecture for Convolutional Neural Network Based on Processing-in-Memory Using Resistive Random Access Memory |
title_sort | ultra-high-speed accelerator architecture for convolutional neural network based on processing-in-memory using resistive random access memory |
topic | Communication |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10007456/ https://www.ncbi.nlm.nih.gov/pubmed/36904605 http://dx.doi.org/10.3390/s23052401 |
work_keys_str_mv | AT wanghongzhe ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT wangjunjie ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT huhao ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT liguo ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT hushaogang ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT yuqi ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT liuzhen ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT chentupei ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT zhoushijie ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory AT liuyang ultrahighspeedacceleratorarchitectureforconvolutionalneuralnetworkbasedonprocessinginmemoryusingresistiverandomaccessmemory |