
A TTFS-based energy and utilization efficient neuromorphic CNN accelerator


Bibliographic Details
Autores principales: Yu, Miao, Xiang, Tingting, P., Srivatsa, Chu, Kyle Timothy Ng, Amornpaisannon, Burin, Tavva, Yaswanth, Miriyala, Venkata Pavan Kumar, Carlson, Trevor E.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10198466/
https://www.ncbi.nlm.nih.gov/pubmed/37214405
http://dx.doi.org/10.3389/fnins.2023.1121592
author Yu, Miao
Xiang, Tingting
P., Srivatsa
Chu, Kyle Timothy Ng
Amornpaisannon, Burin
Tavva, Yaswanth
Miriyala, Venkata Pavan Kumar
Carlson, Trevor E.
collection PubMed
description Spiking neural networks (SNNs), which are a form of neuromorphic, brain-inspired AI, have the potential to be a power-efficient alternative to artificial neural networks (ANNs). Spikes that occur in SNN systems, also known as activations, tend to be extremely sparse and low in number. This minimizes the number of data accesses typically needed for processing. In addition, SNN systems are typically designed to use addition operations, which consume much less energy than the multiply-and-accumulate operations used in DNN systems. The vast majority of neuromorphic hardware designs support rate-based SNNs, where the information is encoded by spike rates. Generally, rate-based SNNs can be inefficient, as a large number of spikes will be transmitted and processed during inference. One coding scheme that has the potential to improve efficiency is time-to-first-spike (TTFS) coding, where the information is not conveyed through the frequency of spikes, but through the relative spike arrival time. In TTFS-based SNNs, each neuron can spike only once during the entire inference process, and this results in high sparsity. The activation sparsity of TTFS-based SNNs is higher than that of rate-based SNNs, but TTFS-based SNNs have yet to achieve the same accuracy as rate-based SNNs. In this work, we propose two key improvements for TTFS-based SNN systems: (1) a novel optimization algorithm to improve the accuracy of TTFS-based SNNs and (2) a novel hardware accelerator for TTFS-based SNNs that uses a scalable and low-power design. Our work in TTFS coding and training improves the accuracy of TTFS-based SNNs to achieve state-of-the-art results on the MNIST and Fashion-MNIST datasets. Meanwhile, our work reduces the power consumption by at least 2.4×, 25.9×, and 38.4× over the state-of-the-art neuromorphic hardware on MNIST, Fashion-MNIST, and CIFAR10, respectively.
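To illustrate the TTFS scheme the abstract describes (each neuron fires at most once, with the information carried by how early the spike arrives), here is a minimal toy encoder. This is an illustrative sketch only, not the paper's implementation; the function name, the time horizon `t_max`, and the linear intensity-to-latency mapping are all assumptions made for the example.

```python
# Toy time-to-first-spike (TTFS) encoder: illustrative only.
# Each input value produces at most one spike; stronger inputs
# spike earlier, so the information lives in spike timing.

def ttfs_encode(values, t_max=100, threshold=0.0):
    """Map each input intensity in [0, 1] to a single spike time in [0, t_max].

    Inputs at or below `threshold` never spike (returned as None),
    which is the source of TTFS sparsity: at most one spike per neuron
    over the whole inference window.
    """
    times = []
    for v in values:
        if v <= threshold:
            times.append(None)  # neuron stays silent
        else:
            # Linear latency code: stronger input -> earlier spike.
            times.append(round((1.0 - v) * t_max))
    return times

pixels = [0.0, 0.25, 0.5, 1.0]
print(ttfs_encode(pixels))  # [None, 75, 50, 0]
```

Note how a rate code would need many spikes per neuron to represent these intensities, while the TTFS code above emits at most one spike each; this single-spike constraint is what drives the sparsity (and hence the energy savings) that the accelerator exploits.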
format Online
Article
Text
id pubmed-10198466
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10198466 2023-05-20 A TTFS-based energy and utilization efficient neuromorphic CNN accelerator
Front Neurosci Neuroscience
Frontiers Media S.A. 2023-05-05 /pmc/articles/PMC10198466/ /pubmed/37214405 http://dx.doi.org/10.3389/fnins.2023.1121592
Text en Copyright © 2023 Yu, Xiang, P., Chu, Amornpaisannon, Tavva, Miriyala and Carlson. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title A TTFS-based energy and utilization efficient neuromorphic CNN accelerator
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10198466/
https://www.ncbi.nlm.nih.gov/pubmed/37214405
http://dx.doi.org/10.3389/fnins.2023.1121592