Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing...
Main Authors: | Knight, James C., Tully, Philip J., Kaplan, Bernhard A., Lansner, Anders, Furber, Steve B. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2016 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4823276/ https://www.ncbi.nlm.nih.gov/pubmed/27092061 http://dx.doi.org/10.3389/fnana.2016.00037 |
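The abstract describes an event-based BCPNN implementation in which synaptic state variables are advanced analytically between spikes instead of being updated every simulation time-step. A minimal sketch of that idea is below; the trace names (`z`, `p`), time constants, and class structure are illustrative assumptions, not the paper's exact formulation or SpiNNaker code.

```python
import math

class BcpnnTrace:
    """One low-pass trace pair (z feeding p), advanced analytically
    between events rather than once per time-step."""

    def __init__(self, tau_z, tau_p):
        # Assumed dynamics: dz/dt = -z / tau_z, dp/dt = (z - p) / tau_p
        self.tau_z = tau_z
        self.tau_p = tau_p
        self.z = 0.0
        self.p = 0.0
        self.last_t = 0.0

    def advance(self, t):
        """Jump the state from the last event time to t in closed form."""
        dt = t - self.last_t
        ez = math.exp(-dt / self.tau_z)
        ep = math.exp(-dt / self.tau_p)
        # Analytical solution of dp/dt = (z - p) / tau_p with an
        # exponentially decaying input z (valid for tau_z != tau_p):
        k = self.tau_z / (self.tau_z - self.tau_p)
        self.p = self.p * ep + self.z * k * (ez - ep)
        self.z *= ez
        self.last_t = t

    def on_spike(self, t, increment=1.0):
        """Event-driven update: decay analytically, then bump z."""
        self.advance(t)
        self.z += increment
```

In the BCPNN paradigm, pre-, post-, and co-activation probability traces of this kind are combined into a weight (schematically, w ∝ log(p_ij / (p_i · p_j))); the point of the event-driven form is that nothing has to loop over quiet time-steps, at the cost of evaluating a few exponentials per spike.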
_version_ | 1782425879945674752 |
---|---|
author | Knight, James C. Tully, Philip J. Kaplan, Bernhard A. Lansner, Anders Furber, Steve B. |
author_facet | Knight, James C. Tully, Philip J. Kaplan, Bernhard A. Lansner, Anders Furber, Steve B. |
author_sort | Knight, James C. |
collection | PubMed |
description | SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. |
format | Online Article Text |
id | pubmed-4823276 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-48232762016-04-18 Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware Knight, James C. Tully, Philip J. Kaplan, Bernhard A. Lansner, Anders Furber, Steve B. Front Neuroanat Neuroscience SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately 45× more power. 
This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models. Frontiers Media S.A. 2016-04-07 /pmc/articles/PMC4823276/ /pubmed/27092061 http://dx.doi.org/10.3389/fnana.2016.00037 Text en Copyright © 2016 Knight, Tully, Kaplan, Lansner and Furber. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Knight, James C. Tully, Philip J. Kaplan, Bernhard A. Lansner, Anders Furber, Steve B. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware |
title | Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware |
title_full | Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware |
title_fullStr | Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware |
title_full_unstemmed | Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware |
title_short | Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware |
title_sort | large-scale simulations of plastic neural networks on neuromorphic hardware |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4823276/ https://www.ncbi.nlm.nih.gov/pubmed/27092061 http://dx.doi.org/10.3389/fnana.2016.00037 |
work_keys_str_mv | AT knightjamesc largescalesimulationsofplasticneuralnetworksonneuromorphichardware AT tullyphilipj largescalesimulationsofplasticneuralnetworksonneuromorphichardware AT kaplanbernharda largescalesimulationsofplasticneuralnetworksonneuromorphichardware AT lansneranders largescalesimulationsofplasticneuralnetworksonneuromorphichardware AT furbersteveb largescalesimulationsofplasticneuralnetworksonneuromorphichardware |