
Supervised Learning in All FeFET-Based Spiking Neural Network: Opportunities and Challenges


Bibliographic Details
Main Authors: Dutta, Sourav; Schafer, Clemens; Gomez, Jorge; Ni, Kai; Joshi, Siddharth; Datta, Suman
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2020
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7327100/
https://www.ncbi.nlm.nih.gov/pubmed/32670012
http://dx.doi.org/10.3389/fnins.2020.00634
collection PubMed
description The two possible pathways toward artificial intelligence (AI)—(i) neuroscience-oriented neuromorphic computing [like the spiking neural network (SNN)] and (ii) computer-science-driven machine learning (like deep learning) differ widely in their fundamental formalism and coding schemes (Pei et al., 2019). Deviating from the traditional deep learning approach of relying on neuronal models with static nonlinearities, SNNs attempt to capture brain-like features such as computation using spikes. This holds the promise of improving the energy efficiency of computing platforms. To achieve much higher areal and energy efficiency than today's hardware implementations of SNNs, we need to go beyond the traditional route of relying on CMOS-based digital or mixed-signal neuronal circuits and the segregation of computation and memory under the von Neumann architecture. Recently, ferroelectric field-effect transistors (FeFETs) have been explored as a promising alternative for building neuromorphic hardware, utilizing their non-volatile nature and rich polarization switching dynamics. In this work, we propose an all-FeFET-based SNN hardware that allows low-power spike-based information processing and co-localized memory and computing (a.k.a. in-memory computing). We experimentally demonstrate the essential neuronal and synaptic dynamics in a 28 nm high-K metal gate FeFET technology. Furthermore, drawing inspiration from the traditional machine learning approach of optimizing a cost function to adjust the synaptic weights, we implement a surrogate gradient (SG) learning algorithm on our SNN platform that allows us to perform supervised learning on the MNIST dataset. As such, we provide a pathway toward building energy-efficient neuromorphic hardware that can support traditional machine learning algorithms. Finally, we undertake synergistic device-algorithm co-design by accounting for the impacts of device-level variation (stochasticity) and limited bit precision of on-chip synaptic weights (available analog states) on the classification accuracy.
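The two ideas central to the abstract—a surrogate gradient that stands in for the derivative of the non-differentiable spike threshold, and synaptic weights restricted to a limited set of noisy analog states—can be illustrated with a minimal toy sketch. This is not the authors' implementation: the fast-sigmoid surrogate, the leaky integrate-and-fire neuron with soft reset, the 16 weight levels, the Gaussian device noise, and the rate-matching task are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate_grad(u, theta=1.0, beta=2.0):
    """Fast-sigmoid pseudo-derivative used in place of d(spike)/du."""
    return 1.0 / (1.0 + beta * np.abs(u - theta)) ** 2

def quantize(w, n_states=16, w_max=1.0, sigma=0.02):
    """Snap weights to a limited set of analog states, plus device noise."""
    levels = np.linspace(-w_max, w_max, n_states)
    idx = np.abs(np.asarray(w)[..., None] - levels).argmin(axis=-1)
    return levels[idx] + rng.normal(0.0, sigma, size=np.shape(w))

# Toy task: drive one LIF neuron to a target firing rate from 2 spike trains.
T, theta, lr = 50, 1.0, 0.5
x = (rng.random((T, 2)) < np.array([0.8, 0.2])).astype(float)  # input spikes
w = rng.normal(0.0, 0.3, size=2)
target_rate = 0.5

for step in range(300):
    wq = quantize(w)                        # network runs on quantized weights
    u, n_spikes, grad = 0.0, 0.0, np.zeros(2)
    for t in range(T):
        u = 0.9 * u + wq @ x[t]             # leaky membrane integration
        s = float(u >= theta)               # hard threshold (forward pass)
        grad += surrogate_grad(u) * x[t]    # surrogate (backward pass)
        u -= s * theta                      # soft reset after a spike
        n_spikes += s
    err = n_spikes / T - target_rate        # rate-based loss gradient
    w = np.clip(w - lr * err * grad / T, -1.0, 1.0)

final_rate = n_spikes / T
print(f"final firing rate: {final_rate:.2f} (target {target_rate})")
```

The forward pass uses the hard threshold, so the network still communicates with binary spikes; only the weight update sees the smooth surrogate. Running the forward pass on `quantize(w)` while updating a full-precision copy `w` mirrors the co-design question the paper raises: how few analog states, and how much device stochasticity, the training loop can tolerate.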
id pubmed-7327100
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
Journal: Front Neurosci (Neuroscience)
Published online: 2020-06-24 by Frontiers Media S.A.
Copyright © 2020 Dutta, Schafer, Gomez, Ni, Joshi and Datta. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
topic Neuroscience