Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission

Bibliographic Details
Main Authors: Miner, Daniel, Wörgötter, Florentin, Tetzlaff, Christian, Fauth, Michael
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8301101/
https://www.ncbi.nlm.nih.gov/pubmed/34202473
http://dx.doi.org/10.3390/biology10070577
_version_ 1783726594386296832
author Miner, Daniel
Wörgötter, Florentin
Tetzlaff, Christian
Fauth, Michael
author_facet Miner, Daniel
Wörgötter, Florentin
Tetzlaff, Christian
Fauth, Michael
author_sort Miner, Daniel
collection PubMed
description SIMPLE SUMMARY: Information processing in the brain takes place in multiple stages, each of which is a local network of neurons. The long-range connections between these network stages are sparse and do not change over time. Thus, within each stage, information arrives at a sparse subset of input neurons and must be routed to a sparse subset of output neurons. In this theoretical work, we investigate how networks achieve this routing in a self-organized manner without losing information. We show that biologically inspired self-organization distributes the input information to all neurons in the network by strengthening many synapses in the local network. Thus, after successful self-organization, the input information can be read out and decoded from a small number of output neurons. We also show that this form of self-organization can still be more energy efficient than creating additional long-range input and output connections. ABSTRACT: Our brains process information using a layered hierarchical network architecture, with abundant connections within each layer and sparse long-range connections between layers. As these long-range connections are mostly unchanged after development, each layer has to locally self-organize in response to new inputs to enable information routing between the sparse input and output connections. Here we demonstrate that this can be achieved by a well-established model of cortical self-organization based on a well-orchestrated interplay between several plasticity processes. After this self-organization, stimuli conveyed by sparse inputs can be rapidly read out from a layer using only very few long-range connections. To achieve this information routing, the stimulated neurons form feed-forward projections into the unstimulated parts of the same layer and thereby recruit more neurons to represent the stimulus. In this process, the plasticity mechanisms ensure that each neuron receives projections from, and responds to, only one stimulus, such that the network is partitioned into parts with different preferred stimuli. Along these lines, we show that the relation between network activity and connectivity self-organizes into a biologically plausible regime. Finally, we argue how the emerging connectivity may minimize the metabolic cost of maintaining a network structure that rapidly transmits stimulus information despite sparse input and output connectivity.
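For readers who want a concrete feel for the mechanism described in the abstract, the toy script below is a minimal illustrative sketch only: a rate-based recurrent layer driven through a sparse set of input units, with Hebbian strengthening kept in check by per-neuron synaptic normalization as a rough stand-in for the interplay of plasticity processes studied in the paper. It is not the authors' model; all parameters, unit counts, and variable names are assumptions chosen for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 100            # recurrent units in one "layer" (hypothetical size)
    n_inputs = 5       # sparse, fixed long-range input connections
    steps = 2000
    eta = 0.01         # Hebbian learning rate (illustrative value)

    # Only a few units receive the long-range stimulus.
    input_units = rng.choice(N, size=n_inputs, replace=False)

    # Random sparse recurrent connectivity (~10%), no self-connections.
    W = (rng.random((N, N)) < 0.10) * rng.random((N, N)) * 0.1
    np.fill_diagonal(W, 0.0)

    rate = np.zeros(N)
    for t in range(steps):
        # External drive arrives only at the sparse input subset.
        ext = np.zeros(N)
        ext[input_units] = 1.0

        # Simple rate dynamics with a saturating nonlinearity.
        rate = np.tanh(W @ rate + ext)

        # Hebbian strengthening of synapses between co-active units ...
        W += eta * np.outer(rate, rate)
        np.fill_diagonal(W, 0.0)

        # ... kept in check by synaptic normalization (incoming weights
        # of each neuron scaled to a fixed total), a crude stand-in for
        # the homeostatic plasticity in the paper.
        W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-9)

    # After self-organization, the stimulus can be decoded from a few
    # randomly chosen "output" units that receive no direct input.
    output_units = rng.choice(N, size=n_inputs, replace=False)
    print("mean activity at sparse readout units:", rate[output_units].mean())

Under these assumptions, activity that initially reaches only the input subset spreads through strengthened recurrent synapses, so the stimulus becomes readable from units without any direct long-range input, which is the routing behaviour the abstract describes.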
format Online
Article
Text
id pubmed-8301101
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8301101 2021-07-24 Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission Miner, Daniel Wörgötter, Florentin Tetzlaff, Christian Fauth, Michael Biology (Basel) Article SIMPLE SUMMARY: Information processing in the brain takes place in multiple stages, each of which is a local network of neurons. The long-range connections between these network stages are sparse and do not change over time. Thus, within each stage, information arrives at a sparse subset of input neurons and must be routed to a sparse subset of output neurons. In this theoretical work, we investigate how networks achieve this routing in a self-organized manner without losing information. We show that biologically inspired self-organization distributes the input information to all neurons in the network by strengthening many synapses in the local network. Thus, after successful self-organization, the input information can be read out and decoded from a small number of output neurons. We also show that this form of self-organization can still be more energy efficient than creating additional long-range input and output connections. ABSTRACT: Our brains process information using a layered hierarchical network architecture, with abundant connections within each layer and sparse long-range connections between layers. As these long-range connections are mostly unchanged after development, each layer has to locally self-organize in response to new inputs to enable information routing between the sparse input and output connections. Here we demonstrate that this can be achieved by a well-established model of cortical self-organization based on a well-orchestrated interplay between several plasticity processes. After this self-organization, stimuli conveyed by sparse inputs can be rapidly read out from a layer using only very few long-range connections. To achieve this information routing, the stimulated neurons form feed-forward projections into the unstimulated parts of the same layer and thereby recruit more neurons to represent the stimulus. In this process, the plasticity mechanisms ensure that each neuron receives projections from, and responds to, only one stimulus, such that the network is partitioned into parts with different preferred stimuli. Along these lines, we show that the relation between network activity and connectivity self-organizes into a biologically plausible regime. Finally, we argue how the emerging connectivity may minimize the metabolic cost of maintaining a network structure that rapidly transmits stimulus information despite sparse input and output connectivity. MDPI 2021-06-24 /pmc/articles/PMC8301101/ /pubmed/34202473 http://dx.doi.org/10.3390/biology10070577 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Miner, Daniel
Wörgötter, Florentin
Tetzlaff, Christian
Fauth, Michael
Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission
title Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission
title_full Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission
title_fullStr Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission
title_full_unstemmed Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission
title_short Self-Organized Structuring of Recurrent Neuronal Networks for Reliable Information Transmission
title_sort self-organized structuring of recurrent neuronal networks for reliable information transmission
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8301101/
https://www.ncbi.nlm.nih.gov/pubmed/34202473
http://dx.doi.org/10.3390/biology10070577
work_keys_str_mv AT minerdaniel selforganizedstructuringofrecurrentneuronalnetworksforreliableinformationtransmission
AT worgotterflorentin selforganizedstructuringofrecurrentneuronalnetworksforreliableinformationtransmission
AT tetzlaffchristian selforganizedstructuringofrecurrentneuronalnetworksforreliableinformationtransmission
AT fauthmichael selforganizedstructuringofrecurrentneuronalnetworksforreliableinformationtransmission