Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring


Bibliographic Details
Main Authors: Pronold, Jari; Jordan, Jakob; Wylie, Brian J. N.; Kitayama, Itaru; Diesmann, Markus; Kunkel, Susanne
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8921864/
https://www.ncbi.nlm.nih.gov/pubmed/35300490
http://dx.doi.org/10.3389/fninf.2021.785068
author Pronold, Jari; Jordan, Jakob; Wylie, Brian J. N.; Kitayama, Itaru; Diesmann, Markus; Kunkel, Susanne
collection PubMed
description Generic simulation code for spiking neuronal networks spends the major part of the time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular and unsorted with respect to their targets. For finding those targets, the spikes need to be dispatched to a three-dimensional data structure with decisions on target thread and synapse type to be made on the way. With growing network size, a compute node receives spikes from an increasing number of different source neurons until in the limit each synapse on the compute node has a unique source. Here, we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code, we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm, all threads search through all spikes to pick out the relevant ones. With increasing network size, the fraction of hits remains invariant but the absolute number of rejections grows. Our new alternative algorithm equally divides the spikes among the threads and immediately sorts them in parallel according to target thread and synapse type. After this, every thread completes delivery solely of the section of spikes for its own neurons. Independent of the number of threads, all spikes are looked at only two times. The new algorithm halves the number of instructions in spike delivery which leads to a reduction of simulation time of up to 40%. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency that the instructions experience in accessing memory. The study provides the foundation for the exploration of methods of latency hiding like software pipelining and software-induced prefetching.
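The two-pass delivery scheme the abstract describes can be sketched in Python. This is an illustrative model only: the spike tuple layout and the names `NUM_THREADS`, `SYNAPSE_TYPES`, and `deliver_spikes` are assumptions for the sketch, not the production data structures, and the per-thread loops are written sequentially where the real simulator runs them concurrently.

```python
from collections import defaultdict

NUM_THREADS = 4
SYNAPSE_TYPES = ("excitatory", "inhibitory")

def deliver_spikes(spikes, num_threads=NUM_THREADS):
    """spikes: unsorted list of (target_thread, synapse_type, payload) tuples."""
    # Pass 1: every thread takes an equal share of the unsorted spike buffer
    # and bins its share by (target thread, synapse type); in the real
    # simulator these loops run concurrently, one per thread.
    shares = [spikes[t::num_threads] for t in range(num_threads)]
    partial_bins = []
    for share in shares:  # conceptually parallel
        bins = defaultdict(list)
        for target_thread, synapse_type, payload in share:
            bins[(target_thread, synapse_type)].append(payload)
        partial_bins.append(bins)

    # Single synchronization point: all threads finish binning before any
    # thread starts delivering.

    # Pass 2: each thread reads only the bins addressed to its own neurons,
    # so every spike is touched exactly twice in total.
    delivered = {t: defaultdict(list) for t in range(num_threads)}
    for t in range(num_threads):  # conceptually parallel
        for bins in partial_bins:
            for syn_type in SYNAPSE_TYPES:
                delivered[t][syn_type].extend(bins.get((t, syn_type), []))
    return delivered
```

Because every spike is seen once during binning and once during delivery, the total work per spike is constant in the number of threads, which is the property the abstract credits for halving the instruction count relative to the all-threads-scan-all-spikes original.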
format Online
Article
Text
id pubmed-8921864
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-89218642022-03-16 Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring Pronold, Jari; Jordan, Jakob; Wylie, Brian J. N.; Kitayama, Itaru; Diesmann, Markus; Kunkel, Susanne Front Neuroinform Neuroscience Frontiers Media S.A. 2022-03-01 /pmc/articles/PMC8921864/ /pubmed/35300490 http://dx.doi.org/10.3389/fninf.2021.785068 Text en Copyright © 2022 Pronold, Jordan, Wylie, Kitayama, Diesmann and Kunkel. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Routing Brain Traffic Through the Von Neumann Bottleneck: Parallel Sorting and Refactoring
topic Neuroscience