
Brain-inspired replay for continual learning with artificial neural networks

Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as ‘generative replay’, which can successfully – and surprisingly efficiently – prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario. However, scaling up generative replay to complicated problems with many tasks or complex inputs is challenging. We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the network’s own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain.
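The ‘generative replay’ idea the abstract builds on can be sketched with a toy stand-in: a generative model is fit to the inputs of past tasks, and when a new task arrives, samples drawn from it are labeled by the old classifier and mixed into the new training data, so no raw data is stored. The names below (`GaussianReplayGenerator`, `NearestMeanClassifier`, `make_task`) are illustrative inventions for this sketch, not the paper's actual architecture, which uses a variational autoencoder merged with the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(means, n=200):
    """Toy task: one 2-D Gaussian cluster per class."""
    X = np.concatenate([m + rng.normal(scale=0.3, size=(n, 2)) for m in means])
    y = np.concatenate([np.full(n, c) for c in range(len(means))])
    return X, y

class NearestMeanClassifier:
    """Crude stand-in for the neural-network 'solver'."""
    def fit(self, X, y):
        self.means = {int(c): X[y == c].mean(axis=0) for c in np.unique(y)}
    def predict(self, X):
        classes = sorted(self.means)
        d = np.stack([np.linalg.norm(X - self.means[c], axis=1) for c in classes])
        return np.array(classes)[d.argmin(axis=0)]

class GaussianReplayGenerator:
    """Crude stand-in for the generative model: one Gaussian per
    pseudo-class, where pseudo-labels come from the current solver,
    so no training data is retained."""
    def fit(self, X, pseudo_labels):
        self.stats = {int(c): (X[pseudo_labels == c].mean(axis=0),
                               X[pseudo_labels == c].std(axis=0))
                      for c in np.unique(pseudo_labels)}
    def sample(self, n_per_class):
        return np.concatenate([m + s * rng.normal(size=(n_per_class, 2))
                               for m, s in self.stats.values()])

# Task 1: classes 0 and 1.
X1, y1 = make_task([np.array([0.0, 0.0]), np.array([3.0, 0.0])])
solver = NearestMeanClassifier()
solver.fit(X1, y1)
gen = GaussianReplayGenerator()
gen.fit(X1, solver.predict(X1))

# Task 2: classes 2 and 3. Train on new data plus replayed samples,
# with the *old* solver providing labels for the replay.
X2, y2 = make_task([np.array([0.0, 3.0]), np.array([3.0, 3.0])])
y2 = y2 + 2
X_replay = gen.sample(200)
y_replay = solver.predict(X_replay)
new_solver = NearestMeanClassifier()
new_solver.fit(np.concatenate([X2, X_replay]),
               np.concatenate([y2, y_replay]))

acc_old = (new_solver.predict(X1) == y1).mean()  # high thanks to replay
acc_new = (new_solver.predict(X2) == y2).mean()
```

Without the replay mixture (fitting `new_solver` on `X2, y2` alone), the old classes vanish from the model entirely; with it, accuracy on task 1 stays high. The paper's contribution is to replay internal representations generated by the network's own feedback connections rather than raw inputs, which is what lets the approach scale beyond toy problems like this one.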

Full description

Bibliographic Details
Main Authors: van de Ven, Gido M., Siegelmann, Hava T., Tolias, Andreas S.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7426273/
https://www.ncbi.nlm.nih.gov/pubmed/32792531
http://dx.doi.org/10.1038/s41467-020-17866-2
collection PubMed
id pubmed-7426273
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
Journal: Nat Commun
Published online: 2020-08-13

© The Author(s) 2020. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.