
An Appraisal of Incremental Learning Methods



Bibliographic Details
Main Authors: Luo, Yong, Yin, Liancheng, Bai, Wenchao, Mao, Keming
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects: Review
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7712976/
https://www.ncbi.nlm.nih.gov/pubmed/33286958
http://dx.doi.org/10.3390/e22111190
author Luo, Yong
Yin, Liancheng
Bai, Wenchao
Mao, Keming
collection PubMed
description As a special case of machine learning, incremental learning acquires useful knowledge from incoming data continuously, without needing access to the original data. It is expected to have the ability to memorize, which is regarded as one of the ultimate goals of artificial intelligence technology. However, incremental learning remains a long-term challenge. Modern deep neural network models achieve outstanding performance on stationary data distributions with batch training. This restriction leads to catastrophic forgetting in incremental learning scenarios, since the distribution of incoming data is unknown and may differ greatly from that of the old data. Therefore, a model must be both plastic, to acquire new knowledge, and stable, to consolidate existing knowledge. This review presents a systematic survey of the state of the art in incremental learning methods. Published reports were selected from the Web of Science, IEEE Xplore, and DBLP databases up to May 2020. Each paper is reviewed according to its strategy type: architectural, regularization, or rehearsal and pseudo-rehearsal. We compare and discuss the different methods, and we outline the development trends and research focus. We conclude that incremental learning is still an active research area and will remain so for a long period. More attention should be paid to the exploration of both biological systems and computational models.
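The regularization strategy named in the abstract can be illustrated with a minimal sketch: when training on a new task, a quadratic penalty anchors each parameter to its old-task value, weighted by a per-parameter importance estimate (the idea behind EWC-style methods). All names and numbers below are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def penalized_loss(task_loss, params, old_params, importance, lam=1.0):
    """Total loss = task loss + (lam / 2) * sum(importance * (params - old_params)^2).

    High importance on a parameter keeps the model stable (it resists change);
    low importance leaves it plastic (free to learn the new task).
    """
    penalty = 0.5 * lam * np.sum(importance * (params - old_params) ** 2)
    return task_loss + penalty

# Toy usage: two parameters, the first deemed important for the old task.
old = np.array([1.0, -2.0])
imp = np.array([10.0, 0.1])   # strong anchoring vs. nearly free to move
cur = np.array([1.5, 0.0])
total = penalized_loss(task_loss=0.3, params=cur, old_params=old, importance=imp)
```

In this toy case the important first parameter contributes 10 * 0.25 and the unimportant second only 0.1 * 4 to the penalty, so drifting on the first parameter is penalized far more heavily.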
format Online Article Text
id pubmed-7712976
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7712976 2021-02-24 An Appraisal of Incremental Learning Methods. Luo, Yong; Yin, Liancheng; Bai, Wenchao; Mao, Keming. Entropy (Basel), Review. MDPI 2020-10-22 /pmc/articles/PMC7712976/ /pubmed/33286958 http://dx.doi.org/10.3390/e22111190 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
title An Appraisal of Incremental Learning Methods
topic Review
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7712976/
https://www.ncbi.nlm.nih.gov/pubmed/33286958
http://dx.doi.org/10.3390/e22111190