
Bayesian continual learning via spiking neural networks

Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps toward the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification.
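The abstract describes the approach only in prose, so the following is a minimal, self-contained sketch of what a streaming mean-field Gaussian update over real-valued synaptic weights can look like. It is an illustrative assumption, not the learning rule derived in the paper and not Intel Lava code: the class name GaussianSynapses, the softplus parameterization of the standard deviation, the prior-pull terms, and all hyperparameters are invented for this example, and the SNN forward pass with its surrogate gradient is replaced by a random placeholder.

```python
# Illustrative sketch only: each synaptic weight w_i is kept as a Gaussian
# N(mu_i, sigma_i^2) whose parameters are updated online as data arrive.
import numpy as np

rng = np.random.default_rng(0)

class GaussianSynapses:
    """Per-weight Gaussian posterior; sigma = softplus(rho) keeps it positive."""

    def __init__(self, n_weights, prior_mu=0.0, prior_sigma=1.0, lr=1e-2):
        self.mu = np.full(n_weights, prior_mu)
        self.rho = np.full(n_weights, np.log(np.expm1(prior_sigma)))  # softplus^-1
        self.prior_mu, self.prior_sigma, self.lr = prior_mu, prior_sigma, lr

    def sample(self):
        # Reparameterization: w = mu + sigma * eps, eps ~ N(0, 1)
        self.eps = rng.standard_normal(self.mu.shape)
        self.sigma = np.log1p(np.exp(self.rho))
        return self.mu + self.sigma * self.eps

    def update(self, grad_w):
        """Streaming step from the gradient of the per-sample loss w.r.t. the
        sampled weights, plus a pull toward the prior (epistemic regularizer)."""
        kl_mu = (self.mu - self.prior_mu) / self.prior_sigma ** 2
        kl_sigma = self.sigma / self.prior_sigma ** 2 - 1.0 / self.sigma
        dsig_drho = 1.0 / (1.0 + np.exp(-self.rho))  # d softplus / d rho
        self.mu -= self.lr * (grad_w + kl_mu)
        self.rho -= self.lr * (grad_w * self.eps + kl_sigma) * dsig_drho

# Usage: at each step, sample weights, run the SNN and its surrogate-gradient
# loss, and feed the resulting grad_w back; here grad_w is random noise.
syn = GaussianSynapses(n_weights=4)
for _ in range(3):
    w = syn.sample()
    grad_w = rng.standard_normal(w.shape)  # placeholder for the SNN gradient
    syn.update(grad_w)
print(syn.mu, np.log1p(np.exp(syn.rho)))
```

Keeping a per-weight mean and variance is what allows the model to report epistemic uncertainty and to regularize learning on new tasks toward previously acquired knowledge, which is the continual-learning angle summarized in the abstract.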

Bibliographic Details
Main Authors: Skatchkovsky, Nicolas, Jang, Hyeryung, Simeone, Osvaldo
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9708898/
https://www.ncbi.nlm.nih.gov/pubmed/36465962
http://dx.doi.org/10.3389/fncom.2022.1037976
_version_ 1784841042203246592
author Skatchkovsky, Nicolas
Jang, Hyeryung
Simeone, Osvaldo
author_facet Skatchkovsky, Nicolas
Jang, Hyeryung
Simeone, Osvaldo
author_sort Skatchkovsky, Nicolas
collection PubMed
description Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps toward the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification.
format Online
Article
Text
id pubmed-9708898
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9708898 2022-12-01 Bayesian continual learning via spiking neural networks Skatchkovsky, Nicolas Jang, Hyeryung Simeone, Osvaldo Front Comput Neurosci Neuroscience Among the main features of biological intelligence are energy efficiency, capacity for continual adaptation, and risk management via uncertainty quantification. Neuromorphic engineering has been thus far mostly driven by the goal of implementing energy-efficient machines that take inspiration from the time-based computing paradigm of biological brains. In this paper, we take steps toward the design of neuromorphic systems that are capable of adaptation to changing learning tasks, while producing well-calibrated uncertainty quantification estimates. To this end, we derive online learning rules for spiking neural networks (SNNs) within a Bayesian continual learning framework. In it, each synaptic weight is represented by parameters that quantify the current epistemic uncertainty resulting from prior knowledge and observed data. The proposed online rules update the distribution parameters in a streaming fashion as data are observed. We instantiate the proposed approach for both real-valued and binary synaptic weights. Experimental results using Intel's Lava platform show the merits of Bayesian over frequentist learning in terms of capacity for adaptation and uncertainty quantification. Frontiers Media S.A. 2022-11-16 /pmc/articles/PMC9708898/ /pubmed/36465962 http://dx.doi.org/10.3389/fncom.2022.1037976 Text en Copyright © 2022 Skatchkovsky, Jang and Simeone. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Skatchkovsky, Nicolas
Jang, Hyeryung
Simeone, Osvaldo
Bayesian continual learning via spiking neural networks
title Bayesian continual learning via spiking neural networks
title_full Bayesian continual learning via spiking neural networks
title_fullStr Bayesian continual learning via spiking neural networks
title_full_unstemmed Bayesian continual learning via spiking neural networks
title_short Bayesian continual learning via spiking neural networks
title_sort bayesian continual learning via spiking neural networks
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9708898/
https://www.ncbi.nlm.nih.gov/pubmed/36465962
http://dx.doi.org/10.3389/fncom.2022.1037976
work_keys_str_mv AT skatchkovskynicolas bayesiancontinuallearningviaspikingneuralnetworks
AT janghyeryung bayesiancontinuallearningviaspikingneuralnetworks
AT simeoneosvaldo bayesiancontinuallearningviaspikingneuralnetworks