
Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors

Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.


Bibliographic Details
Main Authors: Hines, Michael L., Eichner, Hubert, Schürmann, Felix
Format: Text
Language: English
Published: Springer US 2008
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2633940/
https://www.ncbi.nlm.nih.gov/pubmed/18214662
http://dx.doi.org/10.1007/s10827-007-0073-3
author Hines, Michael L.
Eichner, Hubert
Schürmann, Felix
collection PubMed
description Neuron tree topology equations can be split into two subtrees and solved on different processors with no change in accuracy, stability, or computational effort; communication costs involve only sending and receiving two double precision values by each subtree at each time step. Splitting cells is useful in attaining load balance in neural network simulations, especially when there is a wide range of cell sizes and the number of cells is about the same as the number of processors. For compute-bound simulations load balance results in almost ideal runtime scaling. Application of the cell splitting method to two published network models exhibits good runtime scaling on twice as many processors as could be effectively used with whole-cell balancing.
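The per-step exchange described in the abstract can be sketched in code. The toy below is illustrative only, not the authors' NEURON implementation: it uses an unbranched cable (a tridiagonal system) rather than a general tree, runs both "processors" in one Python process instead of MPI, and all names (`split_solve`, `eliminate_left`, etc.) are hypothetical. Each side eliminates toward the cut node and produces exactly two doubles (its diagonal and right-hand-side contributions at the cut), which is the only data the subtrees would need to exchange.

```python
import numpy as np

def eliminate_left(d, a, c, b, m):
    """Left 'processor': forward elimination over nodes 0..m-1.
    Returns its reduced arrays plus the two doubles it would send:
    its (diagonal, rhs) contribution at the cut node m."""
    dd, bb = d[:m].astype(float).copy(), b[:m].astype(float).copy()
    for i in range(1, m):
        f = a[i] / dd[i - 1]
        dd[i] -= f * c[i - 1]
        bb[i] -= f * bb[i - 1]
    f = a[m] / dd[m - 1]
    return dd, bb, (-f * c[m - 1], -f * bb[m - 1])

def eliminate_right(d, a, c, b, m):
    """Right 'processor': backward elimination over nodes m+1..n-1.
    Local index j corresponds to global node m+1+j."""
    dd, bb = d[m + 1:].astype(float).copy(), b[m + 1:].astype(float).copy()
    for j in range(len(dd) - 2, -1, -1):
        f = c[m + 1 + j] / dd[j + 1]
        dd[j] -= f * a[m + 2 + j]
        bb[j] -= f * bb[j + 1]
    f = c[m] / dd[0]
    return dd, bb, (-f * a[m + 1], -f * bb[0])

def split_solve(d, a, c, b, m):
    """Solve the tridiagonal system a[i]*v[i-1] + d[i]*v[i] + c[i]*v[i+1]
    = b[i] split at node m; each half adds the other's two doubles,
    solves the cut-node voltage, then back-substitutes independently."""
    ddL, bbL, (dDL, dBL) = eliminate_left(d, a, c, b, m)
    ddR, bbR, (dDR, dBR) = eliminate_right(d, a, c, b, m)
    v = np.empty(len(d))
    v[m] = (b[m] + dBL + dBR) / (d[m] + dDL + dDR)
    for i in range(m - 1, -1, -1):          # left back-substitution
        v[i] = (bbL[i] - c[i] * v[i + 1]) / ddL[i]
    for i in range(m + 1, len(d)):          # right back-substitution
        v[i] = (bbR[i - m - 1] - a[i] * v[i - 1]) / ddR[i - m - 1]
    return v

# demo: a 9-compartment cable split at node 4
rng = np.random.default_rng(1)
n, m = 9, 4
sub = rng.uniform(0.1, 1.0, n - 1)   # symmetric coupling terms
a = np.concatenate(([0.0], sub))     # sub-diagonal (a[0] unused)
c = np.concatenate((sub, [0.0]))     # super-diagonal (c[n-1] unused)
d = 2.0 + a + c                      # diagonally dominant diagonal
b = rng.uniform(-1.0, 1.0, n)
v = split_solve(d, a, c, b, m)
```

Because each side's elimination touches only its own nodes, the two contributions are independent, which is why accuracy and stability are unchanged relative to solving the whole cell on one processor.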
format Text
id pubmed-2633940
institution National Center for Biotechnology Information
language English
publishDate 2008
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-2633940 2009-02-02 J Comput Neurosci Article Springer US 2008-01-23 2008-08 /pmc/articles/PMC2633940/ /pubmed/18214662 http://dx.doi.org/10.1007/s10827-007-0073-3 Text en © Springer Science+Business Media, LLC 2007
title Neuron splitting in compute-bound parallel network simulations enables runtime scaling with twice as many processors
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2633940/
https://www.ncbi.nlm.nih.gov/pubmed/18214662
http://dx.doi.org/10.1007/s10827-007-0073-3