
Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability

In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.


Bibliographic Details
Main Authors: Fernandez-Musoles, Carlos, Coca, Daniel, Richmond, Paul
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6454199/
https://www.ncbi.nlm.nih.gov/pubmed/31001102
http://dx.doi.org/10.3389/fninf.2019.00019
author Fernandez-Musoles, Carlos
Coca, Daniel
Richmond, Paul
collection PubMed
description In the last decade there has been a surge in the number of big science projects interested in achieving a comprehensive understanding of the functions of the brain, using Spiking Neuronal Network (SNN) simulations to aid discovery and experimentation. Such an approach increases the computational demands on SNN simulators: if natural scale brain-size simulations are to be realized, it is necessary to use parallel and distributed models of computing. Communication is recognized as the dominant part of distributed SNN simulations. As the number of computational nodes increases, the proportion of time the simulation spends in useful computing (computational efficiency) is reduced and therefore applies a limit to scalability. This work targets the three phases of communication to improve overall computational efficiency in distributed simulations: implicit synchronization, process handshake and data exchange. We introduce a connectivity-aware allocation of neurons to compute nodes by modeling the SNN as a hypergraph. Partitioning the hypergraph to reduce interprocess communication increases the sparsity of the communication graph. We propose dynamic sparse exchange as an improvement over simple point-to-point exchange on sparse communications. Results show a combined gain when using hypergraph-based allocation and dynamic sparse communication, increasing computational efficiency by up to 40.8 percentage points and reducing simulation time by up to 73%. The findings are applicable to other distributed complex system simulations in which communication is modeled as a graph network.
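The connectivity-aware allocation described in the abstract can be illustrated with a minimal sketch (a hypothetical example, not the authors' code): when synaptically connected neurons are placed on the same compute node, fewer spikes must cross node boundaries, so the communication graph becomes sparser.

```python
# Hypothetical illustration of connectivity-aware neuron allocation.
# A synapse whose pre- and post-synaptic neurons sit on different
# compute nodes forces a spike message over the network.

def remote_synapses(synapses, allocation):
    """Count synapses whose endpoints live on different compute nodes."""
    return sum(1 for pre, post in synapses if allocation[pre] != allocation[post])

# Two clusters of neurons (0-2 and 3-5) with dense intra-cluster
# connectivity and a single inter-cluster synapse (2 -> 3).
synapses = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]

# Naive round-robin allocation to 2 nodes ignores connectivity.
round_robin = {n: n % 2 for n in range(6)}
# Connectivity-aware allocation keeps each cluster on one node.
aware = {n: 0 if n < 3 else 1 for n in range(6)}

print(remote_synapses(synapses, round_robin))  # 5 spike messages cross nodes
print(remote_synapses(synapses, aware))        # only 1 crosses nodes
```

The paper models this allocation problem as hypergraph partitioning; the sketch above only shows the objective being reduced (inter-node traffic), not the partitioning algorithm itself.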
format Online
Article
Text
id pubmed-6454199
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-6454199 2019-04-18 Front Neuroinform Neuroscience Frontiers Media S.A. 2019-04-02 /pmc/articles/PMC6454199/ /pubmed/31001102 http://dx.doi.org/10.3389/fninf.2019.00019 Text en Copyright © 2019 Fernandez-Musoles, Coca and Richmond.
http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Communication Sparsity in Distributed Spiking Neural Network Simulations to Improve Scalability
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6454199/
https://www.ncbi.nlm.nih.gov/pubmed/31001102
http://dx.doi.org/10.3389/fninf.2019.00019