
Model architecture can transform catastrophic forgetting into positive transfer

Bibliographic Details
Main Author: Ruiz-Garcia, Miguel
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9232654/
https://www.ncbi.nlm.nih.gov/pubmed/35750768
http://dx.doi.org/10.1038/s41598-022-14348-x
_version_ 1784735637117599744
author Ruiz-Garcia, Miguel
author_facet Ruiz-Garcia, Miguel
author_sort Ruiz-Garcia, Miguel
collection PubMed
description The work of McCloskey and Cohen popularized the concept of catastrophic interference. They used a neural network that tried to learn addition from two groups of examples presented as two different tasks. In their case, learning the second task rapidly degraded the knowledge acquired on the previous one. We hypothesize that this could be a symptom of a fundamental problem: addition is an algorithmic task that should not be learned through pattern recognition. Therefore, other model architectures better suited for this task would avoid catastrophic forgetting. We use a neural network with a different architecture that can be trained to recover the correct algorithm for the addition of binary numbers. This neural network includes conditional clauses that are naturally treated within the back-propagation algorithm. We test it in the setting proposed by McCloskey and Cohen and by training on random additions one by one. The neural network not only does not suffer from catastrophic forgetting but also improves its predictive power on unseen pairs of numbers as training progresses. We also show that this is a robust effect that persists when averaging over many simulations. This work emphasizes the importance of neural network architecture for the emergence of catastrophic forgetting and introduces a neural network that is able to learn an algorithm.
format Online
Article
Text
id pubmed-9232654
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-92326542022-06-26 Model architecture can transform catastrophic forgetting into positive transfer Ruiz-Garcia, Miguel Sci Rep Article The work of McCloskey and Cohen popularized the concept of catastrophic interference. They used a neural network that tried to learn addition from two groups of examples presented as two different tasks. In their case, learning the second task rapidly degraded the knowledge acquired on the previous one. We hypothesize that this could be a symptom of a fundamental problem: addition is an algorithmic task that should not be learned through pattern recognition. Therefore, other model architectures better suited for this task would avoid catastrophic forgetting. We use a neural network with a different architecture that can be trained to recover the correct algorithm for the addition of binary numbers. This neural network includes conditional clauses that are naturally treated within the back-propagation algorithm. We test it in the setting proposed by McCloskey and Cohen and by training on random additions one by one. The neural network not only does not suffer from catastrophic forgetting but also improves its predictive power on unseen pairs of numbers as training progresses. We also show that this is a robust effect that persists when averaging over many simulations. This work emphasizes the importance of neural network architecture for the emergence of catastrophic forgetting and introduces a neural network that is able to learn an algorithm. Nature Publishing Group UK 2022-06-24 /pmc/articles/PMC9232654/ /pubmed/35750768 http://dx.doi.org/10.1038/s41598-022-14348-x Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ruiz-Garcia, Miguel
Model architecture can transform catastrophic forgetting into positive transfer
title Model architecture can transform catastrophic forgetting into positive transfer
title_full Model architecture can transform catastrophic forgetting into positive transfer
title_fullStr Model architecture can transform catastrophic forgetting into positive transfer
title_full_unstemmed Model architecture can transform catastrophic forgetting into positive transfer
title_short Model architecture can transform catastrophic forgetting into positive transfer
title_sort model architecture can transform catastrophic forgetting into positive transfer
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9232654/
https://www.ncbi.nlm.nih.gov/pubmed/35750768
http://dx.doi.org/10.1038/s41598-022-14348-x
work_keys_str_mv AT ruizgarciamiguel modelarchitecturecantransformcatastrophicforgettingintopositivetransfer
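
To make the continual-learning protocol described in the abstract above concrete, the following is a minimal, illustrative Python/NumPy sketch: a plain feed-forward network is trained on one random addition at a time while exact-match accuracy on held-out pairs is tracked, in the spirit of the McCloskey-Cohen setting. This is not the paper's conditional-clause architecture; the network, hyperparameters, bit widths, and helper names (to_bits, encode, train_step, accuracy) are assumptions made for illustration only.

# Illustrative sketch (not the paper's architecture): a small feed-forward
# network trained one addition at a time, with generalization probed on
# held-out pairs. All names and hyperparameters here are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_BITS = 4                                   # operands are 4-bit numbers
IN, HID, OUT = 2 * N_BITS, 32, N_BITS + 1    # the sum needs one extra carry bit

def to_bits(x, width):
    # Little-endian bit vector of the integer x.
    return np.array([(x >> i) & 1 for i in range(width)], dtype=float)

def encode(a, b):
    # Concatenate the bit vectors of both operands as the network input.
    return np.concatenate([to_bits(a, N_BITS), to_bits(b, N_BITS)])

# A standard two-layer network with sigmoid outputs, trained by plain SGD.
W1 = rng.normal(0, 0.5, (HID, IN)); b1 = np.zeros(HID)
W2 = rng.normal(0, 0.5, (OUT, HID)); b2 = np.zeros(OUT)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return h, sigmoid(W2 @ h + b2)

def train_step(a, b, lr=0.5):
    # One gradient step on a single addition example (online learning).
    global W1, b1, W2, b2
    x, y = encode(a, b), to_bits(a + b, OUT)
    h, yhat = forward(x)
    # Backpropagation of the squared error; sigmoid and tanh derivatives by hand.
    d_out = (yhat - y) * yhat * (1 - yhat)
    d_hid = (W2.T @ d_out) * (1 - h ** 2)
    W2 -= lr * np.outer(d_out, h); b2 -= lr * d_out
    W1 -= lr * np.outer(d_hid, x); b1 -= lr * d_hid

def accuracy(pairs):
    # Fraction of held-out pairs whose sum is predicted exactly (all bits correct).
    ok = 0
    for a, b in pairs:
        _, yhat = forward(encode(a, b))
        ok += np.array_equal((yhat > 0.5).astype(float), to_bits(a + b, OUT))
    return ok / len(pairs)

# Hold out some pairs to probe generalization as sequential training proceeds.
all_pairs = [(a, b) for a in range(2 ** N_BITS) for b in range(2 ** N_BITS)]
rng.shuffle(all_pairs)
held_out, train_pairs = all_pairs[:50], all_pairs[50:]

for step in range(1, 5001):
    a, b = train_pairs[rng.integers(len(train_pairs))]
    train_step(a, b)
    if step % 1000 == 0:
        print(f"step {step:5d}  held-out exact-match accuracy: {accuracy(held_out):.2f}")

The paper's point is that an architecture built to capture the addition algorithm behaves qualitatively differently under this one-example-at-a-time regime than a generic pattern-recognition network like the one sketched here; the sketch only reproduces the training and evaluation protocol, not the claimed positive transfer.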