Sparse RNNs can support high-capacity classification

Feedforward network models performing classification tasks rely on highly convergent output units that collect the information passed on by preceding layers. Although convergent output-unit-like neurons may exist in some biological neural circuits, notably the cerebellar cortex, neocortical circuits do not exhibit any obvious candidates for this role; instead they are highly recurrent. We investigate whether a sparsely connected recurrent neural network (RNN) can perform classification in a distributed manner without ever bringing all of the relevant information to a single convergence site. Our model is based on a sparse RNN that performs classification dynamically. Specifically, the interconnections of the RNN are trained to resonantly amplify the magnitude of responses to some external inputs but not others. The amplified and non-amplified responses then form the basis for binary classification. Furthermore, the network acts as an evidence accumulator and maintains its decision even after the input is turned off. Despite highly sparse connectivity, learned recurrent connections allow input information to flow to every neuron of the RNN, providing the basis for distributed computation. In this arrangement, the minimum number of synapses per neuron required to reach maximum memory capacity scales only logarithmically with network size. The model is robust to various types of noise, works with different activation and loss functions and with both backpropagation- and Hebbian-based learning rules. The RNN can also be constructed with a split excitation-inhibition architecture with little reduction in performance.
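
The abstract describes classification by selective amplification: recurrent weights are trained so that responses to one class of inputs grow in magnitude while responses to the other class decay, and the class is read out from the size of the response rather than from a convergent output unit. Below is a minimal sketch of that idea, not the authors' implementation; it assumes tanh units, a fixed random sparsity mask with K recurrent synapses per neuron, an input pulse that only sets the initial state, backpropagation through time with Adam, and a readout that simply thresholds the mean squared activity. All sizes and hyperparameters (N, K, n_steps, the 0.5 threshold) are illustrative choices, not values from the paper.

# Sketch of magnitude-based classification with a sparse RNN (illustrative only).
import torch

torch.manual_seed(0)
N, K = 200, 20                # network size and recurrent synapses per neuron
n_patterns, n_steps = 40, 30  # number of input patterns and simulation steps

# Fixed sparse connectivity mask: each neuron receives exactly K recurrent inputs.
mask = torch.zeros(N, N)
for i in range(N):
    mask[i, torch.randperm(N)[:K]] = 1.0

W = torch.nn.Parameter(torch.randn(N, N) / K ** 0.5)  # trainable recurrent weights
x = torch.randn(n_patterns, N)                         # random input patterns
labels = (torch.arange(n_patterns) % 2).float()        # half "+" (amplify), half "-"

opt = torch.optim.Adam([W], lr=1e-2)
for step in range(500):
    r = torch.tanh(x)                     # input pulse sets the initial state
    for _ in range(n_steps):              # run the dynamics with the input off
        r = torch.tanh(r @ (W * mask).T)
    mag = r.pow(2).mean(dim=1)            # response magnitude per pattern
    # Push "+" patterns toward large magnitude and "-" patterns toward small.
    loss = ((mag - labels) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

pred = (mag > 0.5).float()                # classify by thresholding the magnitude
print("training accuracy:", (pred == labels).float().mean().item())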

Bibliographic Details
Main Authors: Turcu, Denis; Abbott, L. F.
Format: Online Article Text
Language: English
Journal: PLoS Comput Biol
Published: Public Library of Science, 2022-12-14
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9797087/
https://www.ncbi.nlm.nih.gov/pubmed/36516226
http://dx.doi.org/10.1371/journal.pcbi.1010759
License: © 2022 Turcu, Abbott. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.