
Constructing Neuronal Network Models in Massively Parallel Environments

Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputer...


Bibliographic Details
Main Authors: Ippen, Tammo, Eppler, Jochen M., Plesser, Hans E., Diesmann, Markus
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5432669/
https://www.ncbi.nlm.nih.gov/pubmed/28559808
http://dx.doi.org/10.3389/fninf.2017.00030
_version_ 1783236681990668288
author Ippen, Tammo
Eppler, Jochen M.
Plesser, Hans E.
Diesmann, Markus
author_facet Ippen, Tammo
Eppler, Jochen M.
Plesser, Hans E.
Diesmann, Markus
author_sort Ippen, Tammo
collection PubMed
description Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.
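The description above attributes the poor scaling of thread-parallel network creation to memory allocation: many threads constructing small neuron and connection objects concurrently contend on shared heap state. The C++/OpenMP fragment below is a minimal illustrative sketch of that construction pattern, not code from the paper or from NEST; all names (Neuron, create_neurons) are hypothetical.

    #include <omp.h>
    #include <vector>
    #include <cstddef>

    struct Neuron {
        double v_m = -70.0;         // membrane potential (mV), illustrative state
        std::vector<double> state;  // per-neuron buffer; heap allocation on construction
        Neuron() : state(64, 0.0) {}
    };

    // One container per thread: each thread allocates the neurons it will
    // later update, so construction is embarrassingly parallel in principle.
    std::vector<std::vector<Neuron*>> create_neurons(std::size_t n_total)
    {
        std::vector<std::vector<Neuron*>> per_thread(omp_get_max_threads());
        #pragma omp parallel
        {
            const std::size_t tid = omp_get_thread_num();
            const std::size_t stride = per_thread.size();
            // Round-robin distribution of neuron IDs over threads.
            for (std::size_t gid = tid; gid < n_total; gid += stride)
                per_thread[tid].push_back(new Neuron);  // many small concurrent allocations
        }
        return per_thread;  // cleanup omitted in this sketch
    }

With a default, lock-based allocator the concurrent calls to new above serialize on shared heap state, consistent with the limited thread speedup reported in the abstract; linking a thread-caching allocator such as tcmalloc or jemalloc (for example via LD_PRELOAD) gives each thread its own arena and is the kind of remedy the authors describe as recovering excellent scaling.

The remark on loop order can be illustrated in the same hedged way. Assuming, purely for illustration, a round-robin distribution of neuron IDs over processes, a wiring loop can either test every candidate target for locality or stride directly from one local target to the next:

    // Placeholder for the real connection routine.
    void connect(std::size_t src, std::size_t tgt);

    // Naive order: the locality test runs once per candidate target.
    void wire_naive(std::size_t src, std::size_t n_total,
                    std::size_t rank, std::size_t n_ranks)
    {
        for (std::size_t tgt = 0; tgt < n_total; ++tgt)
            if (tgt % n_ranks == rank)  // executed n_total times
                connect(src, tgt);
    }

    // Locality-aware order: the loop body runs only n_total / n_ranks times.
    void wire_strided(std::size_t src, std::size_t n_total,
                      std::size_t rank, std::size_t n_ranks)
    {
        for (std::size_t tgt = rank; tgt < n_total; tgt += n_ranks)
            connect(src, tgt);
    }

For networks of millions of neurons the strided form skips the (n_ranks - 1)/n_ranks non-local candidates entirely, which is the sense in which more careful locality handling lets construction algorithms step through large networks more efficiently than in existing code.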
format Online
Article
Text
id pubmed-5432669
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-54326692017-05-30 Constructing Neuronal Network Models in Massively Parallel Environments Ippen, Tammo Eppler, Jochen M. Plesser, Hans E. Diesmann, Markus Front Neuroinform Neuroscience Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. Frontiers Media S.A. 2017-05-16 /pmc/articles/PMC5432669/ /pubmed/28559808 http://dx.doi.org/10.3389/fninf.2017.00030 Text en Copyright © 2017 Ippen, Eppler, Plesser and Diesmann. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Ippen, Tammo
Eppler, Jochen M.
Plesser, Hans E.
Diesmann, Markus
Constructing Neuronal Network Models in Massively Parallel Environments
title Constructing Neuronal Network Models in Massively Parallel Environments
title_full Constructing Neuronal Network Models in Massively Parallel Environments
title_fullStr Constructing Neuronal Network Models in Massively Parallel Environments
title_full_unstemmed Constructing Neuronal Network Models in Massively Parallel Environments
title_short Constructing Neuronal Network Models in Massively Parallel Environments
title_sort constructing neuronal network models in massively parallel environments
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5432669/
https://www.ncbi.nlm.nih.gov/pubmed/28559808
http://dx.doi.org/10.3389/fninf.2017.00030
work_keys_str_mv AT ippentammo constructingneuronalnetworkmodelsinmassivelyparallelenvironments
AT epplerjochenm constructingneuronalnetworkmodelsinmassivelyparallelenvironments
AT plesserhanse constructingneuronalnetworkmodelsinmassivelyparallelenvironments
AT diesmannmarkus constructingneuronalnetworkmodelsinmassivelyparallelenvironments