Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration

Bibliographic Details
Main Authors: Zhang, Wei; Gu, Xiaodong
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10217101/
https://www.ncbi.nlm.nih.gov/pubmed/37238532
http://dx.doi.org/10.3390/e25050776
author Zhang, Wei
Gu, Xiaodong
author_facet Zhang, Wei
Gu, Xiaodong
author_sort Zhang, Wei
collection PubMed
description Few-shot class-incremental learning (FSCIL) is an extremely challenging but valuable problem in real-world applications. When faced with novel few-shot tasks at each incremental stage, a model must contend with both catastrophic forgetting of old knowledge and overfitting to new categories with limited training data. In this paper, we propose an efficient prototype replay and calibration (EPRC) method with three stages to improve classification performance. We first perform pre-training with rotation and mix-up augmentations to obtain a strong backbone. We then sample a series of pseudo few-shot tasks for meta-training, which enhances the generalization ability of both the feature extractor and the projection layer and helps mitigate the overfitting problem of few-shot learning. Furthermore, an even nonlinear transformation function is incorporated into the similarity computation to implicitly calibrate the generated prototypes of different categories and reduce the correlations among them. Finally, in the incremental-training stage we replay the stored prototypes to relieve catastrophic forgetting and rectify the prototypes to be more discriminative via an explicit regularization term in the loss function. Experimental results on CIFAR-100 and miniImageNet demonstrate that EPRC significantly boosts classification performance compared with existing mainstream FSCIL methods.
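To make the prototype replay and calibration idea in the description concrete, the following is a minimal Python/PyTorch sketch, not the authors' implementation: the PrototypeStore class, the use of cosine similarity, the temperature value, and the choice of g(x) = x**2 as the even calibration function are all assumptions made for illustration.

# Illustrative sketch only; names and the specific even function are assumptions.
import torch
import torch.nn.functional as F


class PrototypeStore:
    """Keeps one mean-feature prototype per seen class, so earlier classes can be
    replayed at test time without storing their raw training images."""

    def __init__(self):
        self.prototypes = {}  # class id -> feature vector

    def add_session(self, features: torch.Tensor, labels: torch.Tensor) -> None:
        """Average the embedded support samples of each new class into a prototype."""
        for c in labels.unique().tolist():
            self.prototypes[int(c)] = features[labels == c].mean(dim=0)

    def classify(self, queries: torch.Tensor, temperature: float = 16.0) -> torch.Tensor:
        """Score queries against every stored prototype (old and new classes)."""
        classes = sorted(self.prototypes)
        protos = torch.stack([self.prototypes[c] for c in classes])  # [C, D]
        q = F.normalize(queries, dim=-1)
        p = F.normalize(protos, dim=-1)
        sim = q @ p.t()  # cosine similarities, shape [Q, C]
        # Even nonlinear calibration: x**2 is one simple even function, used here as
        # a stand-in; it damps small cross-class correlations relative to strong matches.
        return temperature * sim.pow(2)


# Toy usage: one 5-way 5-shot session with 64-dimensional backbone features.
store = PrototypeStore()
support_feats = torch.randn(25, 64)
support_labels = torch.arange(5).repeat_interleave(5)
store.add_session(support_feats, support_labels)
logits = store.classify(torch.randn(8, 64))
print(logits.shape)  # torch.Size([8, 5])

Because only one averaged vector is kept per class, the memory cost of replay stays constant in the number of samples and grows only with the number of seen classes.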
format Online
Article
Text
id pubmed-10217101
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10217101 2023-05-27 Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration Zhang, Wei; Gu, Xiaodong Entropy (Basel) Article Few-shot class-incremental learning (FSCIL) is an extremely challenging but valuable problem in real-world applications. When faced with novel few-shot tasks at each incremental stage, a model must contend with both catastrophic forgetting of old knowledge and overfitting to new categories with limited training data. In this paper, we propose an efficient prototype replay and calibration (EPRC) method with three stages to improve classification performance. We first perform pre-training with rotation and mix-up augmentations to obtain a strong backbone. We then sample a series of pseudo few-shot tasks for meta-training, which enhances the generalization ability of both the feature extractor and the projection layer and helps mitigate the overfitting problem of few-shot learning. Furthermore, an even nonlinear transformation function is incorporated into the similarity computation to implicitly calibrate the generated prototypes of different categories and reduce the correlations among them. Finally, in the incremental-training stage we replay the stored prototypes to relieve catastrophic forgetting and rectify the prototypes to be more discriminative via an explicit regularization term in the loss function. Experimental results on CIFAR-100 and miniImageNet demonstrate that EPRC significantly boosts classification performance compared with existing mainstream FSCIL methods. MDPI 2023-05-10 /pmc/articles/PMC10217101/ /pubmed/37238532 http://dx.doi.org/10.3390/e25050776 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Zhang, Wei
Gu, Xiaodong
Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration
title Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration
title_full Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration
title_fullStr Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration
title_full_unstemmed Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration
title_short Few Shot Class Incremental Learning via Efficient Prototype Replay and Calibration
title_sort few shot class incremental learning via efficient prototype replay and calibration
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10217101/
https://www.ncbi.nlm.nih.gov/pubmed/37238532
http://dx.doi.org/10.3390/e25050776
work_keys_str_mv AT zhangwei fewshotclassincrementallearningviaefficientprototypereplayandcalibration
AT guxiaodong fewshotclassincrementallearningviaefficientprototypereplayandcalibration