
Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations

Graphical processing units (GPUs) can significantly accelerate spiking neural network (SNN) simulations by exploiting parallelism for independent computations. Both the changes in membrane potential at each time-step, and checking for spiking threshold crossings for each neuron, can be calculated independently. However, because synaptic transmission requires communication between many different neurons, efficient parallel processing may be hindered, either by data transfers between GPU and CPU at each time-step or, alternatively, by running many parallel computations for neurons that do not elicit any spikes. This, in turn, would lower the effective throughput of the simulations. Traditionally, a central processing unit (CPU, host) administers the execution of parallel processes on the GPU (device), such as memory initialization on the device, data transfer between host and device, and starting and synchronizing parallel processes. The parallel computing platform CUDA 5.0 introduced dynamic parallelism, which allows the initiation of new parallel applications within an ongoing parallel kernel. Here, we apply dynamic parallelism for synaptic updating in SNN simulations on a GPU. Our algorithm eliminates the need to start many parallel applications at each time-step, and the associated lags of data transfer between CPU and GPU memories. We report a significant speed-up of SNN simulations, when compared to former accelerated parallelization strategies for SNNs on a GPU.


Bibliographic Details
Main Authors: Kasap, Bahadir, van Opstal, A. John
Format: Online Article Text
Language: English
Published: 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6147227/
https://www.ncbi.nlm.nih.gov/pubmed/30245550
http://dx.doi.org/10.1016/j.neucom.2018.04.007
_version_ 1783356531644825600
author Kasap, Bahadir
van Opstal, A. John
author_facet Kasap, Bahadir
van Opstal, A. John
author_sort Kasap, Bahadir
collection PubMed
description Graphical processing units (GPUs) can significantly accelerate spiking neural network (SNN) simulations by exploiting parallelism for independent computations. Both the changes in membrane potential at each time-step, and checking for spiking threshold crossings for each neuron, can be calculated independently. However, because synaptic transmission requires communication between many different neurons, efficient parallel processing may be hindered, either by data transfers between GPU and CPU at each time-step or, alternatively, by running many parallel computations for neurons that do not elicit any spikes. This, in turn, would lower the effective throughput of the simulations. Traditionally, a central processing unit (CPU, host) administers the execution of parallel processes on the GPU (device), such as memory initialization on the device, data transfer between host and device, and starting and synchronizing parallel processes. The parallel computing platform CUDA 5.0 introduced dynamic parallelism, which allows the initiation of new parallel applications within an ongoing parallel kernel. Here, we apply dynamic parallelism for synaptic updating in SNN simulations on a GPU. Our algorithm eliminates the need to start many parallel applications at each time-step, and the associated lags of data transfer between CPU and GPU memories. We report a significant speed-up of SNN simulations, when compared to former accelerated parallelization strategies for SNNs on a GPU.
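The mechanism the abstract describes — letting the neuron-update kernel itself launch synaptic-update kernels on the device, so that no per-time-step host intervention or CPU–GPU transfer is needed — can be sketched with CUDA dynamic parallelism as follows. This is a minimal illustration under assumed leaky integrate-and-fire dynamics and a flat connectivity layout (`targets`, `weights`, `offsets`, `degrees`); all identifiers are hypothetical and this is not the authors' implementation.

```cuda
// Sketch of synaptic updating via CUDA dynamic parallelism.
// Requires compute capability >= 3.5; compile with relocatable
// device code, e.g.: nvcc -arch=sm_35 -rdc=true snn.cu -lcudadevrt

// Child kernel: deliver one neuron's spike to its postsynaptic targets.
__global__ void propagateSpike(const int *targets, const float *weights,
                               float *I_syn, int nTargets)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < nTargets)
        atomicAdd(&I_syn[targets[t]], weights[t]);  // accumulate synaptic input
}

// Parent kernel: one thread per neuron integrates the membrane potential
// and, on a threshold crossing, launches the child kernel from the device.
__global__ void updateNeurons(float *V, float *I_syn,
                              const int *targets, const float *weights,
                              const int *offsets, const int *degrees,
                              int nNeurons, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nNeurons) return;

    // Illustrative leaky integrate-and-fire update.
    V[i] += dt * (-V[i] + I_syn[i]);

    if (V[i] > 1.0f) {          // spiking threshold crossed
        V[i] = 0.0f;            // reset membrane potential
        int n = degrees[i];     // out-degree of neuron i
        if (n > 0)              // device-side launch: no host round trip
            propagateSpike<<<(n + 127) / 128, 128>>>(
                targets + offsets[i], weights + offsets[i], I_syn, n);
    }
}
```

The point of the device-side launch is that child kernels are spawned only for neurons that actually spiked, avoiding both idle threads for silent neurons and the per-step host synchronization that a CPU-administered launch strategy would require.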
format Online
Article
Text
id pubmed-6147227
institution National Center for Biotechnology Information
language English
publishDate 2018
record_format MEDLINE/PubMed
spelling pubmed-6147227 2018-09-20 Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations Kasap, Bahadir van Opstal, A. John Neurocomputing Article Graphical processing units (GPUs) can significantly accelerate spiking neural network (SNN) simulations by exploiting parallelism for independent computations. Both the changes in membrane potential at each time-step, and checking for spiking threshold crossings for each neuron, can be calculated independently. However, because synaptic transmission requires communication between many different neurons, efficient parallel processing may be hindered, either by data transfers between GPU and CPU at each time-step or, alternatively, by running many parallel computations for neurons that do not elicit any spikes. This, in turn, would lower the effective throughput of the simulations. Traditionally, a central processing unit (CPU, host) administers the execution of parallel processes on the GPU (device), such as memory initialization on the device, data transfer between host and device, and starting and synchronizing parallel processes. The parallel computing platform CUDA 5.0 introduced dynamic parallelism, which allows the initiation of new parallel applications within an ongoing parallel kernel. Here, we apply dynamic parallelism for synaptic updating in SNN simulations on a GPU. Our algorithm eliminates the need to start many parallel applications at each time-step, and the associated lags of data transfer between CPU and GPU memories. We report a significant speed-up of SNN simulations, when compared to former accelerated parallelization strategies for SNNs on a GPU. 2018-05-02 /pmc/articles/PMC6147227/ /pubmed/30245550 http://dx.doi.org/10.1016/j.neucom.2018.04.007 Text en http://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license. (http://creativecommons.org/licenses/by-nc-nd/4.0/)
spellingShingle Article
Kasap, Bahadir
van Opstal, A. John
Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
title Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
title_full Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
title_fullStr Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
title_full_unstemmed Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
title_short Dynamic parallelism for synaptic updating in GPU-accelerated spiking neural network simulations
title_sort dynamic parallelism for synaptic updating in gpu-accelerated spiking neural network simulations
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6147227/
https://www.ncbi.nlm.nih.gov/pubmed/30245550
http://dx.doi.org/10.1016/j.neucom.2018.04.007
work_keys_str_mv AT kasapbahadir dynamicparallelismforsynapticupdatingingpuacceleratedspikingneuralnetworksimulations
AT vanopstalajohn dynamicparallelismforsynapticupdatingingpuacceleratedspikingneuralnetworksimulations