From statistical inference to a differential learning rule for stochastic neural networks
Stochastic neural networks are a prototypical computational device able to build a probabilistic representation of an ensemble of external stimuli. Building on the relationship between inference and learning, we derive a synaptic plasticity rule that relies only on delayed activity correlations, and that shows a number of remarkable features. Our delayed-correlations matching (DCM) rule satisfies some basic requirements for biological feasibility: finite and noisy afferent signals, Dale’s principle and asymmetry of synaptic connections, locality of the weight update computations. Nevertheless, the DCM rule is capable of storing a large, extensive number of patterns as attractors in a stochastic recurrent neural network, under general scenarios without requiring any modification: it can deal with correlated patterns, a broad range of architectures (with or without hidden neuronal states), one-shot learning with the palimpsest property, all the while avoiding the proliferation of spurious attractors. When hidden units are present, our learning rule can be employed to construct Boltzmann machine-like generative models, exploiting the addition of hidden neurons in feature extraction and classification tasks.
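The abstract describes the DCM rule only at a high level: a contrastive update that matches time-delayed pairwise activity correlations measured with and without the external stimulus. As an illustrative aid, here is a minimal sketch of what such an update could look like for a ±1 stochastic recurrent network; the Glauber dynamics, the field strength `field`, the learning rate `eta`, and all function names are assumptions made for illustration, not the authors' published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def glauber_sweep(s, W, ext=None, beta=1.0):
    """One asynchronous Glauber sweep of a +/-1 stochastic network.
    `ext` is an optional external field (a finite afferent stimulus)."""
    for i in rng.permutation(len(s)):
        h = W[i] @ s
        if ext is not None:
            h += ext[i]
        p = 1.0 / (1.0 + np.exp(-2.0 * beta * h))  # P(s_i -> +1)
        s[i] = 1.0 if rng.random() < p else -1.0
    return s

def mean_delayed_corr(W, ext=None, T=200, burn=50):
    """Estimate delayed correlations <s_i(t+1) s_j(t)> between
    consecutive network states under the stochastic dynamics."""
    N = W.shape[0]
    s = rng.choice([-1.0, 1.0], size=N)
    C = np.zeros_like(W)
    for t in range(burn + T):
        s_prev = s.copy()
        s = glauber_sweep(s, W, ext)
        if t >= burn:
            C += np.outer(s, s_prev)
    return C / T

def dcm_step(W, pattern, field=2.0, eta=0.05):
    """One sketched delayed-correlations matching step: nudge the weights
    so the delayed correlations of the free dynamics move toward those
    measured while the stimulus field is on."""
    C_on = mean_delayed_corr(W, ext=field * pattern)  # stimulus-on phase
    C_off = mean_delayed_corr(W)                      # free phase
    W = W + eta * (C_on - C_off)
    np.fill_diagonal(W, 0.0)                          # no self-couplings
    return W

# Hypothetical usage: repeatedly present one random pattern to a small network.
N = 40
W = 0.1 * rng.standard_normal((N, N))
np.fill_diagonal(W, 0.0)
pattern = rng.choice([-1.0, 1.0], size=N)
for _ in range(20):
    W = dcm_step(W, pattern)
```

Note that, unlike classical Boltzmann machine learning, the statistics matched here are delayed (one step apart) rather than equal-time correlations, which is compatible with the asymmetric connections the abstract mentions.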
Main Authors: | Saglietti, Luca; Gerace, Federica; Ingrosso, Alessandro; Baldassi, Carlo; Zecchina, Riccardo |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | The Royal Society, 2018 |
Subjects: | Articles |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6227809/ https://www.ncbi.nlm.nih.gov/pubmed/30443331 http://dx.doi.org/10.1098/rsfs.2018.0033 |
_version_ | 1783370001098473472 |
---|---|
author | Saglietti, Luca; Gerace, Federica; Ingrosso, Alessandro; Baldassi, Carlo; Zecchina, Riccardo |
author_facet | Saglietti, Luca; Gerace, Federica; Ingrosso, Alessandro; Baldassi, Carlo; Zecchina, Riccardo |
author_sort | Saglietti, Luca |
collection | PubMed |
description | Stochastic neural networks are a prototypical computational device able to build a probabilistic representation of an ensemble of external stimuli. Building on the relationship between inference and learning, we derive a synaptic plasticity rule that relies only on delayed activity correlations, and that shows a number of remarkable features. Our delayed-correlations matching (DCM) rule satisfies some basic requirements for biological feasibility: finite and noisy afferent signals, Dale’s principle and asymmetry of synaptic connections, locality of the weight update computations. Nevertheless, the DCM rule is capable of storing a large, extensive number of patterns as attractors in a stochastic recurrent neural network, under general scenarios without requiring any modification: it can deal with correlated patterns, a broad range of architectures (with or without hidden neuronal states), one-shot learning with the palimpsest property, all the while avoiding the proliferation of spurious attractors. When hidden units are present, our learning rule can be employed to construct Boltzmann machine-like generative models, exploiting the addition of hidden neurons in feature extraction and classification tasks. |
format | Online Article Text |
id | pubmed-6227809 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | The Royal Society |
record_format | MEDLINE/PubMed |
spelling | pubmed-6227809 2018-11-15 From statistical inference to a differential learning rule for stochastic neural networks Saglietti, Luca Gerace, Federica Ingrosso, Alessandro Baldassi, Carlo Zecchina, Riccardo Interface Focus Articles Stochastic neural networks are a prototypical computational device able to build a probabilistic representation of an ensemble of external stimuli. Building on the relationship between inference and learning, we derive a synaptic plasticity rule that relies only on delayed activity correlations, and that shows a number of remarkable features. Our delayed-correlations matching (DCM) rule satisfies some basic requirements for biological feasibility: finite and noisy afferent signals, Dale’s principle and asymmetry of synaptic connections, locality of the weight update computations. Nevertheless, the DCM rule is capable of storing a large, extensive number of patterns as attractors in a stochastic recurrent neural network, under general scenarios without requiring any modification: it can deal with correlated patterns, a broad range of architectures (with or without hidden neuronal states), one-shot learning with the palimpsest property, all the while avoiding the proliferation of spurious attractors. When hidden units are present, our learning rule can be employed to construct Boltzmann machine-like generative models, exploiting the addition of hidden neurons in feature extraction and classification tasks. The Royal Society 2018-12-06 2018-10-19 /pmc/articles/PMC6227809/ /pubmed/30443331 http://dx.doi.org/10.1098/rsfs.2018.0033 Text en © 2018 The Authors. http://creativecommons.org/licenses/by/4.0/ Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited. |
spellingShingle | Articles; Saglietti, Luca; Gerace, Federica; Ingrosso, Alessandro; Baldassi, Carlo; Zecchina, Riccardo; From statistical inference to a differential learning rule for stochastic neural networks |
title | From statistical inference to a differential learning rule for stochastic neural networks |
title_full | From statistical inference to a differential learning rule for stochastic neural networks |
title_fullStr | From statistical inference to a differential learning rule for stochastic neural networks |
title_full_unstemmed | From statistical inference to a differential learning rule for stochastic neural networks |
title_short | From statistical inference to a differential learning rule for stochastic neural networks |
title_sort | from statistical inference to a differential learning rule for stochastic neural networks |
topic | Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6227809/ https://www.ncbi.nlm.nih.gov/pubmed/30443331 http://dx.doi.org/10.1098/rsfs.2018.0033 |
work_keys_str_mv | AT sagliettiluca fromstatisticalinferencetoadifferentiallearningruleforstochasticneuralnetworks AT geracefederica fromstatisticalinferencetoadifferentiallearningruleforstochasticneuralnetworks AT ingrossoalessandro fromstatisticalinferencetoadifferentiallearningruleforstochasticneuralnetworks AT baldassicarlo fromstatisticalinferencetoadifferentiallearningruleforstochasticneuralnetworks AT zecchinariccardo fromstatisticalinferencetoadifferentiallearningruleforstochasticneuralnetworks |