Sequence learning, prediction, and replay in networks of spiking neurons
Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay.
Main Authors: | Bouhadjar, Younes; Wouters, Dirk J.; Diesmann, Markus; Tetzlaff, Tom |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9273101/ https://www.ncbi.nlm.nih.gov/pubmed/35727857 http://dx.doi.org/10.1371/journal.pcbi.1010233 |
author | Bouhadjar, Younes; Wouters, Dirk J.; Diesmann, Markus; Tetzlaff, Tom |
collection | PubMed |
description | Sequence learning, prediction and replay have been proposed to constitute the universal computations performed by the neocortex. The Hierarchical Temporal Memory (HTM) algorithm realizes these forms of computation. It learns sequences in an unsupervised and continuous manner using local learning rules, permits a context specific prediction of future sequence elements, and generates mismatch signals in case the predictions are not met. While the HTM algorithm accounts for a number of biological features such as topographic receptive fields, nonlinear dendritic processing, and sparse connectivity, it is based on abstract discrete-time neuron and synapse dynamics, as well as on plasticity mechanisms that can only partly be related to known biological mechanisms. Here, we devise a continuous-time implementation of the temporal-memory (TM) component of the HTM algorithm, which is based on a recurrent network of spiking neurons with biophysically interpretable variables and parameters. The model learns high-order sequences by means of a structural Hebbian synaptic plasticity mechanism supplemented with a rate-based homeostatic control. In combination with nonlinear dendritic input integration and local inhibitory feedback, this type of plasticity leads to the dynamic self-organization of narrow sequence-specific subnetworks. These subnetworks provide the substrate for a faithful propagation of sparse, synchronous activity, and, thereby, for a robust, context specific prediction of future sequence elements as well as for the autonomous replay of previously learned sequences. By strengthening the link to biology, our implementation facilitates the evaluation of the TM hypothesis based on experimentally accessible quantities. The continuous-time implementation of the TM algorithm permits, in particular, an investigation of the role of sequence timing for sequence learning, prediction and replay. We demonstrate this aspect by studying the effect of the sequence speed on the sequence learning performance and on the speed of autonomous sequence replay. |
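The abstract above describes the model only at a conceptual level. As a purely illustrative aid (not taken from the article), the following minimal sketch caricatures the temporal-memory idea in discrete steps: a cell counts as "predicted" when its dendritic input from the previously active cells exceeds a threshold and then fires selectively, an unpredicted column bursts (mismatch), and a Hebbian rule grows connections from previously active cells onto the current winners. All names and parameter values are invented here; the paper's actual model is a continuous-time spiking network with structural plasticity, homeostatic control, and inhibitory feedback.

```python
# Illustrative sketch only: a simplified, discrete-step caricature of the
# temporal-memory mechanism summarized in the abstract. It is NOT the paper's
# continuous-time spiking implementation; names and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)

n_cols = 5          # one (mini)column per sequence element (A..E)
n_cells = 8         # cells per column; sequence context selects which cell fires
theta_d = 1.5       # dendritic threshold above which a cell counts as "predicted"
w_step = 0.5        # weight increment of the Hebbian structural growth
W = np.zeros((n_cols * n_cells, n_cols * n_cells))   # lateral (dendritic) weights

def cells_of(col):
    return np.arange(col * n_cells, (col + 1) * n_cells)

def step(col, prev_active):
    """Activate column `col`; predicted cells win, otherwise the column bursts."""
    dend = W[:, prev_active].sum(axis=1) if prev_active.size else np.zeros(W.shape[0])
    predicted = np.intersect1d(cells_of(col), np.flatnonzero(dend >= theta_d))
    if predicted.size:                       # context-specific prediction met
        active = predicted
    else:                                    # mismatch: the whole column bursts
        active = cells_of(col)
    # learning cells: the predicted winners, or one randomly picked cell on a burst
    learner = active if predicted.size else np.array([rng.choice(cells_of(col))])
    W[np.ix_(learner, prev_active)] += w_step    # Hebbian growth: previous -> current
    return active

# Train on the high-order sequences A-B-C and D-B-E; after learning, the shared
# element B is usually represented by different cells depending on the context.
seqs = [[0, 1, 2], [3, 1, 4]]
for _ in range(20):
    for seq in seqs:
        prev = np.array([], dtype=int)
        for col in seq:
            prev = step(col, prev)
```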
format | Online Article Text |
id | pubmed-9273101 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9273101 2022-07-12 Sequence learning, prediction, and replay in networks of spiking neurons Bouhadjar, Younes; Wouters, Dirk J.; Diesmann, Markus; Tetzlaff, Tom PLoS Comput Biol Research Article Public Library of Science 2022-06-21 /pmc/articles/PMC9273101/ /pubmed/35727857 http://dx.doi.org/10.1371/journal.pcbi.1010233 Text en © 2022 Bouhadjar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
title | Sequence learning, prediction, and replay in networks of spiking neurons |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9273101/ https://www.ncbi.nlm.nih.gov/pubmed/35727857 http://dx.doi.org/10.1371/journal.pcbi.1010233 |