
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks


Bibliographic Details
Main Authors: Naveros, Francisco, Garrido, Jesus A., Carrillo, Richard R., Ros, Eduardo, Luque, Niceto R.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5293783/
https://www.ncbi.nlm.nih.gov/pubmed/28223930
http://dx.doi.org/10.3389/fninf.2017.00007
_version_ 1782505129592750080
author Naveros, Francisco
Garrido, Jesus A.
Carrillo, Richard R.
Ros, Eduardo
Luque, Niceto R.
author_facet Naveros, Francisco
Garrido, Jesus A.
Carrillo, Richard R.
Ros, Eduardo
Luque, Niceto R.
author_sort Naveros, Francisco
collection PubMed
description Modeling and simulating the neural structures that make up our central nervous system is instrumental for deciphering the underlying computational cues. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at the neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: the leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamics are evaluated: the event-driven and the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation. We propose two modifications for the event-driven family: a look-up-table recombination to better cope with increasing neural complexity, together with improved handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural-model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity.
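To make the bi-fixed-step idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' EDLUT implementation) for a LIF neuron: a coarse fixed step is used while the membrane dynamics are slow, and a finer fixed step is switched in near the firing threshold, where the dynamics stiffen. All parameter values, names, and the 5 mV switching margin below are hypothetical.

```python
# Illustrative bi-fixed-step time-driven update for a leaky
# integrate-and-fire (LIF) neuron. Hypothetical parameters throughout.

TAU_M = 0.020      # membrane time constant (s)
V_REST = -0.065    # resting potential (V)
V_THRESH = -0.050  # spike threshold (V)
V_RESET = -0.065   # reset potential (V)
R_M = 1e7          # membrane resistance (ohm)

DT_COARSE = 1e-3       # global (coarse) simulation step (s)
DT_FINE = 1e-4         # local (fine) step used near threshold (s)
SWITCH_MARGIN = 0.005  # switch to the fine step within 5 mV of threshold


def lif_derivative(v, i_ext):
    """dV/dt for the LIF model."""
    return (-(v - V_REST) + R_M * i_ext) / TAU_M


def bi_fixed_step_update(v, i_ext):
    """Advance the neuron by one coarse step, refining near threshold.

    Returns (new_voltage, spiked).
    """
    # Pick the step size based on proximity to threshold (the stiff region).
    if v > V_THRESH - SWITCH_MARGIN:
        dt, n_sub = DT_FINE, round(DT_COARSE / DT_FINE)
    else:
        dt, n_sub = DT_COARSE, 1

    spiked = False
    for _ in range(n_sub):
        v = v + dt * lif_derivative(v, i_ext)  # explicit Euler sub-step
        if v >= V_THRESH:
            v = V_RESET
            spiked = True
    return v, spiked


# Drive the neuron with a constant supra-threshold current for 1 s of
# simulated time (1000 coarse steps) and count the emitted spikes.
v, spikes = V_REST, 0
for _ in range(1000):
    v, fired = bi_fixed_step_update(v, 2e-9)
    spikes += fired
print(spikes)
```

The switching criterion here is a simple voltage margin; the paper's method is more elaborate, but the sketch shows the core trade-off: the fine step is paid for only in the short stiff window around a spike, while the coarse step amortizes cost over the slow subthreshold regime.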
format Online
Article
Text
id pubmed-5293783
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-52937832017-02-21 Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks Naveros, Francisco Garrido, Jesus A. Carrillo, Richard R. Ros, Eduardo Luque, Niceto R. Front Neuroinform Neuroscience Frontiers Media S.A. 2017-02-07 /pmc/articles/PMC5293783/ /pubmed/28223930 http://dx.doi.org/10.3389/fninf.2017.00007 Text en Copyright © 2017 Naveros, Garrido, Carrillo, Ros and Luque. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Naveros, Francisco
Garrido, Jesus A.
Carrillo, Richard R.
Ros, Eduardo
Luque, Niceto R.
Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
title Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
title_full Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
title_fullStr Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
title_full_unstemmed Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
title_short Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks
title_sort event- and time-driven techniques using parallel cpu-gpu co-processing for spiking neural networks
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5293783/
https://www.ncbi.nlm.nih.gov/pubmed/28223930
http://dx.doi.org/10.3389/fninf.2017.00007
work_keys_str_mv AT naverosfrancisco eventandtimedriventechniquesusingparallelcpugpucoprocessingforspikingneuralnetworks
AT garridojesusa eventandtimedriventechniquesusingparallelcpugpucoprocessingforspikingneuralnetworks
AT carrillorichardr eventandtimedriventechniquesusingparallelcpugpucoprocessingforspikingneuralnetworks
AT roseduardo eventandtimedriventechniquesusingparallelcpugpucoprocessingforspikingneuralnetworks
AT luquenicetor eventandtimedriventechniquesusingparallelcpugpucoprocessingforspikingneuralnetworks