Reducing the computational footprint for real-time BCPNN learning
The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm.
Main Authors: | Vogginger, Bernhard; Schüffny, René; Lansner, Anders; Cederström, Love; Partzsch, Johannes; Höppner, Sebastian |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2015 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4302947/ https://www.ncbi.nlm.nih.gov/pubmed/25657618 http://dx.doi.org/10.3389/fnins.2015.00002 |
author | Vogginger, Bernhard Schüffny, René Lansner, Anders Cederström, Love Partzsch, Johannes Höppner, Sebastian |
collection | PubMed |
description | The implementation of synaptic plasticity in neural simulation or neuromorphic hardware is usually very resource-intensive, often requiring a compromise between efficiency and flexibility. A versatile but computationally expensive plasticity mechanism is provided by the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm. Building upon Bayesian statistics and having clear links to biological plasticity processes, the BCPNN learning rule has been applied in many fields, ranging from data classification, associative memory, reward-based learning, and probabilistic inference to cortical attractor memory networks. In the spike-based version of this learning rule, the presynaptic, postsynaptic, and coincident activity is traced in three low-pass-filtering stages, requiring a total of eight state variables, whose dynamics are typically simulated with the fixed-step-size Euler method. We derive analytic solutions allowing an efficient event-driven implementation of this learning rule. Further speedup is achieved, first, by rewriting the model so that the number of basic arithmetic operations per update is halved, and, second, by using look-up tables for the frequently calculated exponential decay. Ultimately, in a typical use case, the simulation using our approach is more than one order of magnitude faster than with the fixed-step-size Euler method. Aiming for a small memory footprint per BCPNN synapse, we also evaluate the use of fixed-point numbers for the state variables, and assess the number of bits required to achieve the same or better accuracy than with the conventional explicit Euler method. All of this will allow a real-time simulation of a reduced cortex model based on BCPNN on high-performance computing systems. More importantly, with the analytic solution at hand and due to the reduced memory bandwidth, the learning rule can be efficiently implemented in dedicated or existing digital neuromorphic hardware. (The event-driven update and the look-up-table decay are illustrated in the sketches following this record.) |
format | Online Article Text |
id | pubmed-4302947 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | Front Neurosci (Neuroscience). Frontiers Media S.A., published online 2015-01-22. /pmc/articles/PMC4302947/ /pubmed/25657618 http://dx.doi.org/10.3389/fnins.2015.00002 Copyright © 2015 Vogginger, Schüffny, Lansner, Cederström, Partzsch and Höppner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
title | Reducing the computational footprint for real-time BCPNN learning |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4302947/ https://www.ncbi.nlm.nih.gov/pubmed/25657618 http://dx.doi.org/10.3389/fnins.2015.00002 |
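The abstract above is compact, so a brief illustration may help. The spike-based BCPNN rule traces activity through a cascade of first-order low-pass filters (Z, E, and P stages; eight state variables per synapse in total, counting the pre-, post-, and mutual traces). Between spikes this cascade is a linear ODE, so its exact solution over a silent interval is a matrix exponential, which is what makes an event-driven update possible. The Python sketch below shows the idea for a single presynaptic chain z → e → p only; the time constants, the unit spike increment, and all names are illustrative assumptions, not the paper's formulation, and the paper derives closed-form expressions rather than evaluating a generic matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative time constants (ms); the paper's parameters may differ.
TAU_Z, TAU_E, TAU_P = 10.0, 100.0, 1000.0

# One low-pass-filter cascade (the presynaptic chain of the rule):
#   tau_z * dz/dt = s(t) - z     (s(t): spike input, modeled as impulses)
#   tau_e * de/dt = z - e
#   tau_p * dp/dt = e - p
# Between spikes s(t) = 0, so x = [z, e, p] obeys dx/dt = A @ x.
A = np.array([
    [-1.0 / TAU_Z,  0.0,          0.0        ],
    [ 1.0 / TAU_E, -1.0 / TAU_E,  0.0        ],
    [ 0.0,          1.0 / TAU_P, -1.0 / TAU_P],
])

def propagate(x, dt):
    """Exact analytic update of the trace cascade over a silent interval dt."""
    return expm(A * dt) @ x

def euler(x, dt, h=0.1):
    """Reference fixed-step explicit Euler integration with step size h (ms)."""
    for _ in range(int(round(dt / h))):
        x = x + h * (A @ x)
    return x

spikes = [5.0, 12.0, 40.0, 41.0]    # presynaptic spike times (ms)

# Event-driven: the state is touched once per spike, not once per time step.
x, last_t = np.zeros(3), 0.0
for t in spikes:
    x = propagate(x, t - last_t)    # one exact jump over the silent gap
    x[0] += 1.0                     # assumed unit increment of the z trace
    last_t = t

# Cross-check against the fixed-step Euler reference.
y, last_t = np.zeros(3), 0.0
for t in spikes:
    y = euler(y, t - last_t)
    y[0] += 1.0
    last_t = t

print("event-driven:", x)
print("euler:       ", y)
```

The point of the event-driven form is that `propagate` runs once per inter-spike interval, while the Euler reference touches the state at every step h; for sparse spiking, that difference is where an order-of-magnitude speedup of the kind reported in the abstract can come from.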
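The abstract also mentions two further optimizations: look-up tables for the frequently calculated exponential decay, and fixed-point state variables for a small memory footprint. The sketch below combines both under assumed parameters (table resolution, 15 fractional bits); the paper itself evaluates which bit widths match or beat explicit Euler accuracy, so these numbers are placeholders only.

```python
import math

# Assumed format: traces stored as integers with 15 fractional bits.
FRAC_BITS = 15
ONE = 1 << FRAC_BITS                # fixed-point representation of 1.0

TAU = 10.0                          # decay time constant (ms), illustrative
DT_STEP = 0.5                       # LUT time resolution (ms), illustrative
LUT_SIZE = 256                      # table covers up to LUT_SIZE * DT_STEP ms

# Precomputed decay factors exp(-k * DT_STEP / TAU), quantized to fixed point.
DECAY_LUT = [int(round(math.exp(-k * DT_STEP / TAU) * ONE))
             for k in range(LUT_SIZE)]

def decay(trace_fx: int, dt: float) -> int:
    """Decay a fixed-point trace by dt ms via table look-up (no exp() at run time)."""
    k = int(dt / DT_STEP)
    if k >= LUT_SIZE:               # beyond the table range: treat as fully decayed
        return 0
    # Fixed-point multiply: widen, multiply, shift back down.
    return (trace_fx * DECAY_LUT[k]) >> FRAC_BITS

# Example: a z trace at 0.8 decaying for 12 ms.
z = decay(int(0.8 * ONE), 12.0)
print("fixed-point:", z / ONE, " float reference:", 0.8 * math.exp(-12.0 / TAU))
```

A real hardware implementation would pick the LUT resolution and bit widths from the accuracy analysis in the paper; here they are chosen only to make the example concrete.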