
A Cost-Efficient High-Speed VLSI Architecture for Spiking Convolutional Neural Network Inference Using Time-Step Binary Spike Maps


Bibliographic Details
Main Authors: Zhang, Ling; Yang, Jing; Shi, Cong; Lin, Yingcheng; He, Wei; Zhou, Xichuan; Yang, Xu; Liu, Liyuan; Wu, Nanjian
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8471769/
https://www.ncbi.nlm.nih.gov/pubmed/34577214
http://dx.doi.org/10.3390/s21186006
author Zhang, Ling
Yang, Jing
Shi, Cong
Lin, Yingcheng
He, Wei
Zhou, Xichuan
Yang, Xu
Liu, Liyuan
Wu, Nanjian
author_sort Zhang, Ling
collection PubMed
description Neuromorphic hardware systems have been gaining ever-increasing focus in many embedded applications as they use a brain-inspired, energy-efficient spiking neural network (SNN) model that closely mimics the human cortex mechanism by communicating and processing sensory information via spatiotemporally sparse spikes. In this paper, we fully leverage the characteristics of spiking convolution neural network (SCNN), and propose a scalable, cost-efficient, and high-speed VLSI architecture to accelerate deep SCNN inference for real-time low-cost embedded scenarios. We leverage the snapshot of binary spike maps at each time-step, to decompose the SCNN operations into a series of regular and simple time-step CNN-like processing to reduce hardware resource consumption. Moreover, our hardware architecture achieves high throughput by employing a pixel stream processing mechanism and fine-grained data pipelines. Our Zynq-7045 FPGA prototype reached a high processing speed of 1250 frames/s and high recognition accuracies on the MNIST and Fashion-MNIST image datasets, demonstrating the plausibility of our SCNN hardware architecture for many embedded applications.
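The description above outlines the paper's core trick: at each time step, the current binary spike map is processed like an ordinary CNN feature map, while membrane potentials carry state across time steps. Because inputs are binary, the convolution degenerates into summing weights at spiking positions, with no multiplications. The following NumPy sketch illustrates that idea under common SNN assumptions (integrate-and-fire neurons with reset-to-zero); the function and variable names are ours for illustration, not taken from the paper:

```python
import numpy as np

def timestep_scnn_layer(spike_map, weights, v_mem, threshold=1.0):
    """One time step of a spiking conv layer on a binary spike map.

    Inputs are binary {0, 1}, so the 'convolution' reduces to summing
    the kernel weights wherever an input spike occurred -- the cost
    saving that the time-step decomposition exploits in hardware.
    """
    k = weights.shape[0]                      # square kernel, stride 1, 'valid'
    h, w = spike_map.shape
    oh, ow = h - k + 1, w - k + 1
    for i in range(oh):
        for j in range(ow):
            window = spike_map[i:i + k, j:j + k]
            # accumulate only the weights at spiking input positions
            v_mem[i, j] += weights[window.astype(bool)].sum()
    out_spikes = (v_mem >= threshold).astype(np.uint8)  # fire on crossing
    v_mem[out_spikes == 1] = 0.0                        # reset-to-zero
    return out_spikes, v_mem

# Run a few time steps: the membrane state persists between steps,
# so each step is a simple CNN-like pass over a binary map.
rng = np.random.default_rng(0)
weights = np.full((3, 3), 0.25)
v = np.zeros((26, 26))
for t in range(4):
    spikes_in = (rng.random((28, 28)) < 0.2).astype(np.uint8)
    spikes_out, v = timestep_scnn_layer(spikes_in, weights, v)
```

A hardware pipeline would additionally stream pixels through this computation rather than loop over a stored frame, but the per-time-step structure is the same.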
format Online
Article
Text
id pubmed-8471769
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8471769 2021-09-28 A Cost-Efficient High-Speed VLSI Architecture for Spiking Convolutional Neural Network Inference Using Time-Step Binary Spike Maps Zhang, Ling; Yang, Jing; Shi, Cong; Lin, Yingcheng; He, Wei; Zhou, Xichuan; Yang, Xu; Liu, Liyuan; Wu, Nanjian. Sensors (Basel), Article. MDPI 2021-09-08 /pmc/articles/PMC8471769/ /pubmed/34577214 http://dx.doi.org/10.3390/s21186006 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Cost-Efficient High-Speed VLSI Architecture for Spiking Convolutional Neural Network Inference Using Time-Step Binary Spike Maps
title_sort cost-efficient high-speed vlsi architecture for spiking convolutional neural network inference using time-step binary spike maps
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8471769/
https://www.ncbi.nlm.nih.gov/pubmed/34577214
http://dx.doi.org/10.3390/s21186006