
Geometric framework to predict structure from function in neural networks

Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
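To make the abstract's central idea concrete, the following sketch illustrates (it is not code from the paper) how specified steady-state responses constrain the weights of a threshold-linear recurrent network. It assumes the standard steady-state condition r = max(0, W r + F x), with W the recurrent weights, F the feedforward weights, and all specified rates positive; under those assumptions each response pattern gives one linear equation per neuron, so when the number of patterns does not exceed the number of input synapses the consistent weight vectors form an affine solution space. All variable names and dimensions below are illustrative choices, not the paper's.

```python
# Minimal sketch, assuming a threshold-linear steady state r = max(0, W r + F x).
# For a neuron that is active in every specified pattern, each pattern imposes
# one linear equation on that neuron's incoming weights (its row of W and F).
import numpy as np

rng = np.random.default_rng(0)

N, K, P = 4, 6, 5                         # neurons, external inputs, specified patterns (P <= N + K)
X = rng.uniform(0.1, 1.0, size=(P, K))    # specified network inputs, one pattern per row
R = rng.uniform(0.1, 1.0, size=(P, N))    # specified (positive, hence active) steady-state responses

# For neuron i, the steady state requires  R[:, i] = A @ w_i,  where
#   A = [R | X]   stacks presynaptic rates and inputs for each pattern, and
#   w_i = (recurrent weights onto i, feedforward weights onto i).
A = np.hstack([R, X])                     # shape (P, N + K)

# Any w_i with A @ w_i = R[:, i] reproduces the responses; with P <= N + K the
# solutions form an affine subspace, and lstsq returns its minimum-norm member.
W_rows = []
for i in range(N):
    w_i, *_ = np.linalg.lstsq(A, R[:, i], rcond=None)
    W_rows.append(w_i)
W_full = np.vstack(W_rows)                # row i holds (W_i, F_i)

W, F = W_full[:, :N], W_full[:, N:]
R_check = np.maximum(0.0, R @ W.T + X @ F.T)   # plug the specified rates back into the model
print(np.allclose(R_check, R))                  # True: every pattern is a fixed point
```

Picking a different member of each neuron's affine solution space yields a different network with the same specified responses, which is the degeneracy the paper's geometric characterization is built to describe.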


Bibliographic Details
Main Authors: Biswas, Tirthabir, Fitzgerald, James E.
Format: Online Article Text
Language: English
Published: 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10456994/
https://www.ncbi.nlm.nih.gov/pubmed/37635906
http://dx.doi.org/10.1103/physrevresearch.4.023255
_version_ 1785096832074907648
author Biswas, Tirthabir
Fitzgerald, James E.
author_facet Biswas, Tirthabir
Fitzgerald, James E.
author_sort Biswas, Tirthabir
collection PubMed
description Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture.
format Online
Article
Text
id pubmed-10456994
institution National Center for Biotechnology Information
language English
publishDate 2022
record_format MEDLINE/PubMed
spelling pubmed-104569942023-08-25 Geometric framework to predict structure from function in neural networks Biswas, Tirthabir Fitzgerald, James E. Phys Rev Res Article Neural computation in biological and artificial networks relies on the nonlinear summation of many inputs. The structural connectivity matrix of synaptic weights between neurons is a critical determinant of overall network function, but quantitative links between neural network structure and function are complex and subtle. For example, many networks can give rise to similar functional responses, and the same network can function differently depending on context. Whether certain patterns of synaptic connectivity are required to generate specific network-level computations is largely unknown. Here we introduce a geometric framework for identifying synaptic connections required by steady-state responses in recurrent networks of threshold-linear neurons. Assuming that the number of specified response patterns does not exceed the number of input synapses, we analytically calculate the solution space of all feedforward and recurrent connectivity matrices that can generate the specified responses from the network inputs. A generalization accounting for noise further reveals that the solution space geometry can undergo topological transitions as the allowed error increases, which could provide insight into both neuroscience and machine learning. We ultimately use this geometric characterization to derive certainty conditions guaranteeing a nonzero synapse between neurons. Our theoretical framework could thus be applied to neural activity data to make rigorous anatomical predictions that follow generally from the model architecture. 2022 2022-06-22 /pmc/articles/PMC10456994/ /pubmed/37635906 http://dx.doi.org/10.1103/physrevresearch.4.023255 Text en https://creativecommons.org/licenses/by/4.0/Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International (https://creativecommons.org/licenses/by/4.0/) license. Further distribution of this work must maintain attribution to the author(s) and the published article’s title, journal citation, and DOI. https://creativecommons.org/licenses/by/4.0/This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/) , which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator. The license allows for commercial use.
spellingShingle Article
Biswas, Tirthabir
Fitzgerald, James E.
Geometric framework to predict structure from function in neural networks
title Geometric framework to predict structure from function in neural networks
title_full Geometric framework to predict structure from function in neural networks
title_fullStr Geometric framework to predict structure from function in neural networks
title_full_unstemmed Geometric framework to predict structure from function in neural networks
title_short Geometric framework to predict structure from function in neural networks
title_sort geometric framework to predict structure from function in neural networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10456994/
https://www.ncbi.nlm.nih.gov/pubmed/37635906
http://dx.doi.org/10.1103/physrevresearch.4.023255
work_keys_str_mv AT biswastirthabir geometricframeworktopredictstructurefromfunctioninneuralnetworks
AT fitzgeraldjamese geometricframeworktopredictstructurefromfunctioninneuralnetworks