Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies
Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries have made RNN modeling more approachable and accessible to neuroscience. Yet a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over long periods of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs, which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem based on the idea of specialized skip-connections through time to support the emergence of task-relevant dynamics, and subsequently reinstituted biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical, if not impossible, to learn using conventional methods. Over the numerous tasks considered here, we achieve fewer training steps and shorter wall-clock times, particularly in tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies.
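The core idea lends itself to a compact sketch. Below is a minimal PyTorch illustration of a vanilla RNN augmented with a skip-connection through time: during training, the hidden state at step t receives an extra input from the state `skip` steps in the past, and annealing the skip gain `alpha` to zero reverts the network to the original, biologically plausible architecture. The class name, the single fixed skip interval, and the linear `w_skip` pathway are illustrative assumptions based on the abstract, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class SkipRNN(nn.Module):
    """Vanilla RNN with an auxiliary skip-connection through time.

    While alpha > 0, the hidden state at step t also receives input from
    the hidden state at step t - skip. Annealing alpha to 0 recovers a
    plain RNN with no skip pathway. (Illustrative sketch only.)
    """

    def __init__(self, n_in: int, n_hidden: int, n_out: int, skip: int = 10):
        super().__init__()
        self.n_hidden = n_hidden
        self.skip = skip
        self.w_in = nn.Linear(n_in, n_hidden)
        self.w_rec = nn.Linear(n_hidden, n_hidden)
        self.w_skip = nn.Linear(n_hidden, n_hidden)  # temporal skip pathway
        self.w_out = nn.Linear(n_hidden, n_out)
        self.alpha = 1.0  # skip gain; anneal toward 0 over training

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (time, batch, n_in)
        T, B, _ = x.shape
        h = x.new_zeros(B, self.n_hidden)
        history, outputs = [], []
        for t in range(T):
            rec = self.w_rec(h)
            if self.alpha > 0 and t >= self.skip:
                # extra drive from the hidden state `skip` steps back
                rec = rec + self.alpha * self.w_skip(history[t - self.skip])
            h = torch.tanh(self.w_in(x[t]) + rec)
            history.append(h)
            outputs.append(self.w_out(h))
        return torch.stack(outputs)  # (time, batch, n_out)
```

In this reading, one would train with backpropagation through time while gradually annealing `alpha` to zero; at `alpha = 0` the forward pass is exactly that of a plain RNN, so the skip pathway can be discarded before comparing the trained model to biological circuits.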
| Main Authors: | Soo, Wayne W.M.; Goudar, Vishwa; Wang, Xiao-Jing |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Cold Spring Harbor Laboratory, 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10592728/ https://www.ncbi.nlm.nih.gov/pubmed/37873445 http://dx.doi.org/10.1101/2023.10.10.561588 |
| _version_ | 1785124334445002752 |
|---|---|
| author | Soo, Wayne W.M.; Goudar, Vishwa; Wang, Xiao-Jing |
| author_facet | Soo, Wayne W.M.; Goudar, Vishwa; Wang, Xiao-Jing |
| author_sort | Soo, Wayne W.M. |
| collection | PubMed |
| description | Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries have made RNN modeling more approachable and accessible to neuroscience. Yet a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over long periods of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs, which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem based on the idea of specialized skip-connections through time to support the emergence of task-relevant dynamics, and subsequently reinstituted biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical, if not impossible, to learn using conventional methods. Over the numerous tasks considered here, we achieve fewer training steps and shorter wall-clock times, particularly in tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies. |
| format | Online Article Text |
| id | pubmed-10592728 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2023 |
| publisher | Cold Spring Harbor Laboratory |
| record_format | MEDLINE/PubMed |
| spelling | pubmed-10592728, 2023-10-24. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. Soo, Wayne W.M.; Goudar, Vishwa; Wang, Xiao-Jing. bioRxiv Article (abstract as in the description field above). Cold Spring Harbor Laboratory, 2023-10-10. /pmc/articles/PMC10592728/ /pubmed/37873445 http://dx.doi.org/10.1101/2023.10.10.561588. Text in English. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which allows reusers to distribute, remix, adapt, and build upon the material in any medium or format, so long as attribution is given to the creator; the license allows commercial use. |
| spellingShingle | Article; Soo, Wayne W.M.; Goudar, Vishwa; Wang, Xiao-Jing; Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| title | Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| title_full | Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| title_fullStr | Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| title_full_unstemmed | Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| title_short | Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| title_sort | training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies |
| topic | Article |
| url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10592728/ https://www.ncbi.nlm.nih.gov/pubmed/37873445 http://dx.doi.org/10.1101/2023.10.10.561588 |
| work_keys_str_mv | AT soowaynewm trainingbiologicallyplausiblerecurrentneuralnetworksoncognitivetaskswithlongtermdependencies; AT goudarvishwa trainingbiologicallyplausiblerecurrentneuralnetworksoncognitivetaskswithlongtermdependencies; AT wangxiaojing trainingbiologicallyplausiblerecurrentneuralnetworksoncognitivetaskswithlongtermdependencies |