Optimizing BCPNN Learning Rule for Memory Access
Simulation of large-scale, biologically plausible spiking neural networks, e.g., Bayesian Confidence Propagation Neural Network (BCPNN), usually requires high-performance supercomputers with dedicated accelerators, such as GPUs, FPGAs, or even Application-Specific Integrated Circuits (ASICs). Almost...
Main Authors: | Yang, Yu; Stathis, Dimitrios; Jordão, Rodolfo; Hemani, Ahmed; Lansner, Anders |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2020 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7487417/ https://www.ncbi.nlm.nih.gov/pubmed/32982673 http://dx.doi.org/10.3389/fnins.2020.00878 |
_version_ | 1783581480588410880 |
---|---|
author | Yang, Yu; Stathis, Dimitrios; Jordão, Rodolfo; Hemani, Ahmed; Lansner, Anders |
author_facet | Yang, Yu; Stathis, Dimitrios; Jordão, Rodolfo; Hemani, Ahmed; Lansner, Anders |
author_sort | Yang, Yu |
collection | PubMed |
description | Simulation of large-scale, biologically plausible spiking neural networks, e.g., Bayesian Confidence Propagation Neural Network (BCPNN), usually requires high-performance supercomputers with dedicated accelerators, such as GPUs, FPGAs, or even Application-Specific Integrated Circuits (ASICs). Almost all of these computers are based on the von Neumann architecture, which separates storage and computation. In all these solutions, memory access is the dominant cost, even for highly customized computation and memory architectures such as ASICs. In this paper, we propose an optimization technique that makes the BCPNN simulation memory-access friendly by avoiding a dual-access pattern. The BCPNN synaptic traces and weights are organized as matrices accessed both row-wise and column-wise. Accessing data stored in DRAM with a dual-access pattern is extremely expensive. A post-synaptic history buffer and an approximation function are therefore introduced to eliminate the troublesome column update. An error analysis combining theory and experiments suggests that the probability of introducing intolerable errors through this optimization can be bounded to a very small number, making it almost negligible. Derivation and validation of this bound is the core contribution of the paper. Experiments on a GPU platform show that, compared to the previously reported baseline simulation strategy, the proposed optimization technique reduces the storage requirement by 33%, the global memory access demand by more than 27%, and the DRAM access rate by more than 5%; the latency of updating synaptic traces decreases by roughly 50%. Compared with a similar optimization technique reported in the literature, our method shows considerably better results. Although BCPNN is used as the target neural network model, the proposed optimization method can be applied to other artificial neural network models based on a Hebbian learning rule. (A minimal illustrative sketch of the lazy column-update idea appears after this record.) |
format | Online Article Text |
id | pubmed-7487417 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-7487417 2020-09-25 Optimizing BCPNN Learning Rule for Memory Access Yang, Yu; Stathis, Dimitrios; Jordão, Rodolfo; Hemani, Ahmed; Lansner, Anders Front Neurosci Neuroscience Simulation of large-scale, biologically plausible spiking neural networks, e.g., Bayesian Confidence Propagation Neural Network (BCPNN), usually requires high-performance supercomputers with dedicated accelerators, such as GPUs, FPGAs, or even Application-Specific Integrated Circuits (ASICs). Almost all of these computers are based on the von Neumann architecture, which separates storage and computation. In all these solutions, memory access is the dominant cost, even for highly customized computation and memory architectures such as ASICs. In this paper, we propose an optimization technique that makes the BCPNN simulation memory-access friendly by avoiding a dual-access pattern. The BCPNN synaptic traces and weights are organized as matrices accessed both row-wise and column-wise. Accessing data stored in DRAM with a dual-access pattern is extremely expensive. A post-synaptic history buffer and an approximation function are therefore introduced to eliminate the troublesome column update. An error analysis combining theory and experiments suggests that the probability of introducing intolerable errors through this optimization can be bounded to a very small number, making it almost negligible. Derivation and validation of this bound is the core contribution of the paper. Experiments on a GPU platform show that, compared to the previously reported baseline simulation strategy, the proposed optimization technique reduces the storage requirement by 33%, the global memory access demand by more than 27%, and the DRAM access rate by more than 5%; the latency of updating synaptic traces decreases by roughly 50%. Compared with a similar optimization technique reported in the literature, our method shows considerably better results. Although BCPNN is used as the target neural network model, the proposed optimization method can be applied to other artificial neural network models based on a Hebbian learning rule. Frontiers Media S.A. 2020-08-31 /pmc/articles/PMC7487417/ /pubmed/32982673 http://dx.doi.org/10.3389/fnins.2020.00878 Text en Copyright © 2020 Yang, Stathis, Jordão, Hemani and Lansner. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience Yang, Yu Stathis, Dimitrios Jordão, Rodolfo Hemani, Ahmed Lansner, Anders Optimizing BCPNN Learning Rule for Memory Access |
title | Optimizing BCPNN Learning Rule for Memory Access |
title_full | Optimizing BCPNN Learning Rule for Memory Access |
title_fullStr | Optimizing BCPNN Learning Rule for Memory Access |
title_full_unstemmed | Optimizing BCPNN Learning Rule for Memory Access |
title_short | Optimizing BCPNN Learning Rule for Memory Access |
title_sort | optimizing bcpnn learning rule for memory access |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7487417/ https://www.ncbi.nlm.nih.gov/pubmed/32982673 http://dx.doi.org/10.3389/fnins.2020.00878 |
work_keys_str_mv | AT yangyu optimizingbcpnnlearningruleformemoryaccess AT stathisdimitrios optimizingbcpnnlearningruleformemoryaccess AT jordaorodolfo optimizingbcpnnlearningruleformemoryaccess AT hemaniahmed optimizingbcpnnlearningruleformemoryaccess AT lansneranders optimizingbcpnnlearningruleformemoryaccess |
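The description field above outlines the core idea: instead of writing the synaptic trace/weight matrix column-wise on every post-synaptic spike, the spike is only recorded in a small per-neuron history buffer, and the affected row is brought up to date (approximately) the next time it is accessed row-wise. Below is a minimal, illustrative Python sketch of that lazy-update pattern, assuming a simple exponential trace decay; the class and parameter names (LazySynapticTraces, tau_z, history_len) are hypothetical, and this is not the authors' implementation or the exact BCPNN trace equations.

```python
import numpy as np

class LazySynapticTraces:
    """Illustrative lazy column update: post-synaptic spikes are buffered and
    replayed on the next row-wise access instead of being written column-wise."""

    def __init__(self, n_pre, n_post, tau_z=20.0, history_len=16):
        self.z = np.zeros((n_pre, n_post))    # synaptic traces: row = pre-neuron, column = post-neuron
        self.last_update = np.zeros(n_pre)    # time each row was last brought up to date
        self.tau_z = tau_z                    # assumed trace time constant (ms)
        self.history_len = history_len        # buffer capacity; overflow is where bounded error enters
        self.post_history = [[] for _ in range(n_post)]  # recent spike times per post-neuron

    def on_post_spike(self, j, t):
        # No column-wise write to z[:, j]; only the small history buffer is touched.
        self.post_history[j].append(t)
        if len(self.post_history[j]) > self.history_len:
            self.post_history[j].pop(0)       # spikes dropped here must be covered by an approximation

    def on_pre_spike(self, i, t):
        # Row-wise access: decay the whole row since its last update, then
        # replay the buffered post-synaptic spikes this row has not yet seen.
        dt = t - self.last_update[i]
        self.z[i, :] *= np.exp(-dt / self.tau_z)
        for j, spikes in enumerate(self.post_history):
            for ts in spikes:
                if self.last_update[i] < ts <= t:
                    self.z[i, j] += np.exp(-(t - ts) / self.tau_z)
        self.last_update[i] = t

# Example: a post spike at t = 5 ms is folded into row 0 only when that row
# is accessed at t = 10 ms, already decayed by the 5 ms that have elapsed.
traces = LazySynapticTraces(n_pre=4, n_post=3)
traces.on_post_spike(j=1, t=5.0)
traces.on_pre_spike(i=0, t=10.0)
```

The trade-off mirrors the one described in the abstract: the history buffer is finite, so older spikes must eventually be summarized by an approximation function, which is why the paper derives a bound on the probability of intolerable error rather than claiming exactness.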