Fine-Tuning and the Stability of Recurrent Neural Networks
A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility.
Main Authors: MacNeil, David; Eliasmith, Chris
Format: Online Article Text
Language: English
Published: Public Library of Science, 2011
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181247/ https://www.ncbi.nlm.nih.gov/pubmed/21980334 http://dx.doi.org/10.1371/journal.pone.0022885
_version_ | 1782212740315611136 |
author | MacNeil, David Eliasmith, Chris |
author_facet | MacNeil, David Eliasmith, Chris |
author_sort | MacNeil, David |
collection | PubMed |
description | A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems. |
format | Online Article Text |
id | pubmed-3181247 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2011 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-31812472011-10-06 Fine-Tuning and the Stability of Recurrent Neural Networks MacNeil, David Eliasmith, Chris PLoS One Research Article A central criticism of standard theoretical approaches to constructing stable, recurrent model networks is that the synaptic connection weights need to be finely-tuned. This criticism is severe because proposed rules for learning these weights have been shown to have various limitations to their biological plausibility. Hence it is unlikely that such rules are used to continuously fine-tune the network in vivo. We describe a learning rule that is able to tune synaptic weights in a biologically plausible manner. We demonstrate and test this rule in the context of the oculomotor integrator, showing that only known neural signals are needed to tune the weights. We demonstrate that the rule appropriately accounts for a wide variety of experimental results, and is robust under several kinds of perturbation. Furthermore, we show that the rule is able to achieve stability as good as or better than that provided by the linearly optimal weights often used in recurrent models of the integrator. Finally, we discuss how this rule can be generalized to tune a wide variety of recurrent attractor networks, such as those found in head direction and path integration systems, suggesting that it may be used to tune a wide variety of stable neural systems. Public Library of Science 2011-09-27 /pmc/articles/PMC3181247/ /pubmed/21980334 http://dx.doi.org/10.1371/journal.pone.0022885 Text en MacNeil, Eliasmith. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article MacNeil, David Eliasmith, Chris Fine-Tuning and the Stability of Recurrent Neural Networks |
title | Fine-Tuning and the Stability of Recurrent Neural Networks |
title_full | Fine-Tuning and the Stability of Recurrent Neural Networks |
title_fullStr | Fine-Tuning and the Stability of Recurrent Neural Networks |
title_full_unstemmed | Fine-Tuning and the Stability of Recurrent Neural Networks |
title_short | Fine-Tuning and the Stability of Recurrent Neural Networks |
title_sort | fine-tuning and the stability of recurrent neural networks |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3181247/ https://www.ncbi.nlm.nih.gov/pubmed/21980334 http://dx.doi.org/10.1371/journal.pone.0022885 |
work_keys_str_mv | AT macneildavid finetuningandthestabilityofrecurrentneuralnetworks AT eliasmithchris finetuningandthestabilityofrecurrentneuralnetworks |
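The record above describes a learning rule that tunes the recurrent weights of a neural integrator so that it holds its state without drifting. As a rough illustration of that idea (not the authors' actual rule, which is specified in the article itself), the sketch below builds a small rate-model integrator with deliberately mistuned recurrent weights and applies a local, error-driven update that uses the drift of the decoded value as the tuning signal. The error signal, parameter values, and network sizes here are assumptions chosen to keep the example short.

```python
# Illustrative sketch only: a minimal rate-model integrator whose recurrent
# weights are tuned by a local, error-driven update.  This is NOT the rule
# from MacNeil & Eliasmith (2011); the error signal, parameter values, and
# network sizes below are assumptions chosen for brevity.
import numpy as np

rng = np.random.default_rng(0)
n = 100                         # number of model neurons
dt = 1e-3                       # simulation time step (s)
tau = 0.1                       # synaptic time constant (s)
eta = 1e-2                      # learning rate (chosen for quick convergence here)

enc = rng.choice([-1.0, 1.0], size=n)    # each neuron's preferred direction
gain = rng.uniform(0.5, 1.5, size=n)     # per-neuron input gains
dec = 2.0 * enc / n                      # crude linear decoders (roughly half the
                                         # population is active at any time)

w_in = gain * enc * tau                  # feedforward weights (tau scaling -> integration)
w_rec = 0.9 * np.outer(gain * enc, dec)  # recurrent weights, deliberately detuned by 10%

def rates(current):
    """Rectified-linear rate neurons."""
    return np.maximum(current, 0.0)

a = np.zeros(n)                 # filtered population activity
x_prev = 0.0
for step in range(20000):
    u = 1.0 if step < 1000 else 0.0      # brief velocity command, then hold
    j = w_rec @ a + w_in * u             # total input current to each neuron
    a += (dt / tau) * (rates(j) - a)     # first-order synaptic filtering
    x = float(dec @ a)                   # decoded stored value (e.g. eye position)

    if u == 0.0:
        # Tuning signal: drift of the decoded value while no input is present.
        # The update nudges each recurrent weight in proportion to the
        # presynaptic rate and the postsynaptic neuron's sensitivity to the
        # drift (a delta-rule-style correction of the leak).
        drift = x - x_prev
        w_rec -= eta * np.outer(gain * enc * drift, a)
    x_prev = x

print(f"decoded value at end of hold period: {x:.3f}")
```

With the recurrent weights detuned by 10%, the decoded value leaks toward zero during the hold period; the drift-driven correction should push the weights back toward values that hold the state, mirroring the qualitative claim in the abstract that an error-driven tuning signal can stabilize a recurrent attractor.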