
A scalable implementation of the recursive least-squares algorithm for training spiking neural networks


Bibliographic Details
Main Authors: Arthur, Benjamin J., Kim, Christopher M., Chen, Susu, Preibisch, Stephan, Darshan, Ran
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10333503/
https://www.ncbi.nlm.nih.gov/pubmed/37441157
http://dx.doi.org/10.3389/fninf.2023.1099510
author Arthur, Benjamin J.
Kim, Christopher M.
Chen, Susu
Preibisch, Stephan
Darshan, Ran
collection PubMed
description Training spiking recurrent neural networks on neuronal recordings or behavioral tasks has become a popular way to study computations performed by the nervous system. As the size and complexity of neural recordings increase, there is a need for efficient algorithms that can train models in a short period of time using minimal resources. We present optimized CPU and GPU implementations of the recursive least-squares algorithm in spiking neural networks. The GPU implementation can train networks of one million neurons, with 100 million plastic synapses and a billion static synapses, about 1,000 times faster than an unoptimized reference CPU implementation. We demonstrate the code's utility by training a network, in less than an hour, to reproduce the activity of >66,000 recorded neurons of a mouse performing a decision-making task. The fast implementation enables a more interactive in-silico study of the dynamics and connectivity underlying multi-area computations. It also admits the possibility to train models as in-vivo experiments are being conducted, thus closing the loop between modeling and experiments.
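For context, the core of the recursive least-squares (RLS) training the abstract refers to is a rank-1 update of a running inverse correlation matrix, used FORCE-style to adapt plastic weights online. The sketch below is a minimal, generic RLS readout update on synthetic data, not the paper's actual CPU/GPU code; all names and dimensions are illustrative assumptions.

```python
import numpy as np

def rls_step(w, P, r, target):
    """One RLS update: adapt readout weights w given activity vector r.

    P approximates the inverse correlation matrix of r and is updated
    in place via the Sherman-Morrison rank-1 formula.
    """
    Pr = P @ r                       # P r
    k = Pr / (1.0 + r @ Pr)          # gain vector
    P -= np.outer(k, Pr)             # rank-1 update of the inverse correlation
    err = w @ r - target             # readout error before the update
    w -= err * k                     # correct weights along the gain direction
    return w, P, err

rng = np.random.default_rng(0)
n = 50
w_true = rng.normal(size=n)          # hypothetical target readout
w = np.zeros(n)
P = np.eye(n)                        # P starts as (1/alpha) * I, alpha = 1 here
for _ in range(300):
    r = rng.normal(size=n)           # stand-in for instantaneous firing rates
    w, P, _ = rls_step(w, P, r, w_true @ r)

final_gap = float(np.linalg.norm(w - w_true))
print(final_gap)                     # shrinks toward 0 as updates accumulate
```

Because each step costs O(n^2) in the number of plastic inputs per neuron, scaling this to 100 million plastic synapses is exactly the kind of workload that motivates the GPU implementation described above.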
format Online
Article
Text
id pubmed-10333503
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10333503 2023-07-12
journal Front Neuroinform (Neuroscience)
published Frontiers Media S.A. 2023-06-27 /pmc/articles/PMC10333503/ /pubmed/37441157 http://dx.doi.org/10.3389/fninf.2023.1099510 Text en
license Copyright © 2023 Arthur, Kim, Chen, Preibisch and Darshan. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title A scalable implementation of the recursive least-squares algorithm for training spiking neural networks
topic Neuroscience