Hybrid Fine-Tuning Strategy for Few-Shot Classification

Few-shot classification aims to enable a network to learn feature extraction and label prediction for target categories given only a small number of labeled samples. Current few-shot classification methods focus on the pretraining stage and either fine-tune heuristically or not at all. No fine-tuning or insufficient fine-tuning may yield low accuracy on the given tasks, while excessive fine-tuning leads to poor generalization on unseen samples. To solve these problems, this study proposes a hybrid fine-tuning strategy (HFT), comprising a few-shot linear discriminant analysis module (FSLDA) and an adaptive fine-tuning module (AFT). FSLDA constructs the optimal linear classification function under few-shot conditions to initialize the parameters of the last fully connected layer, which fully exploits the task-specific knowledge of the given tasks and guarantees a lower bound on model accuracy. AFT adopts an adaptive fine-tuning termination rule to determine the optimal number of training epochs and prevent overfitting. AFT is built on FSLDA and outputs the final optimal hybrid fine-tuning strategy for a given sample size and layer-freezing policy. We conducted extensive experiments on mini-ImageNet and tiered-ImageNet to demonstrate the effectiveness of the proposed method: it achieves consistent performance improvements over existing fine-tuning methods under different sample sizes, layer-freezing policies, and few-shot classification frameworks.
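
The abstract only sketches the two modules, so here is a minimal, hypothetical Python sketch of the general idea, not the authors' implementation: LDA-derived class statistics initialize a final fully connected layer (the FSLDA idea), and a plateau rule picks the number of fine-tuning epochs (a stand-in for the AFT termination rule). The function names, the shrinkage constant, and the patience-based stopping criterion are all illustrative assumptions; the paper's actual FSLDA solution and AFT rule should be taken from the article itself.

```python
# Hypothetical sketch of the two ideas described in the abstract; NOT the
# authors' released code. Names and constants are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

def lda_init(features: np.ndarray, labels: np.ndarray, shrinkage: float = 0.1):
    """Derive linear-classifier parameters from class statistics (Gaussian LDA).

    features: (n, d) support-set embeddings from the pretrained backbone.
    labels:   (n,) integer class labels.
    Returns (W, b) for initializing the last fully connected layer.
    """
    classes = np.unique(labels)
    idx = np.searchsorted(classes, labels)          # map labels to 0..C-1
    d = features.shape[1]
    means = np.stack([features[idx == c].mean(axis=0)
                      for c in range(len(classes))])
    # Pooled within-class covariance, shrunk toward the identity because the
    # sample covariance is singular in the few-shot regime (n < d).
    centered = features - means[idx]
    cov = centered.T @ centered / max(len(features) - len(classes), 1)
    cov = (1.0 - shrinkage) * cov + shrinkage * np.eye(d)
    cov_inv = np.linalg.inv(cov)
    W = means @ cov_inv                             # (C, d) discriminant weights
    b = -0.5 * np.einsum('cd,cd->c', W, means)      # -1/2 mu_c^T Sigma^-1 mu_c
    return W, b

def init_last_layer(fc: nn.Linear, W: np.ndarray, b: np.ndarray) -> None:
    """Copy the LDA-derived parameters into the network's last FC layer."""
    with torch.no_grad():
        fc.weight.copy_(torch.as_tensor(W, dtype=fc.weight.dtype))
        fc.bias.copy_(torch.as_tensor(b, dtype=fc.bias.dtype))

def adaptive_fine_tune(model, loader, optimizer,
                       loss_fn=nn.CrossEntropyLoss(),
                       max_epochs=100, patience=3, tol=1e-3):
    """Fine-tune until the training loss plateaus; return the epoch count.

    A simple adaptive termination rule: stop once the epoch loss has not
    improved by more than `tol` for `patience` consecutive epochs.
    """
    best, stale = float('inf'), 0
    for epoch in range(max_epochs):
        total = 0.0
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
            total += loss.item()
        if best - total > tol:
            best, stale = total, 0
        else:
            stale += 1
            if stale >= patience:
                break
    return epoch + 1
```

In an HFT-style pipeline one would call `lda_init` on the support-set embeddings, copy the result into the classifier head with `init_last_layer`, and then run `adaptive_fine_tune` under the chosen layer-freezing policy; the paper's concrete solution and termination criterion may differ from this sketch.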

Bibliographic Details

Main Authors: Zhao, Lei; Ou, Zhonghua; Zhang, Lixun; Li, Shuxiao
Format: Online Article (Text)
Language: English
Published: Hindawi, 2022-10-08
Journal: Comput Intell Neurosci
Subjects: Research Article
Collection: PubMed (record pubmed-9569229, MEDLINE/PubMed format, National Center for Biotechnology Information)
Rights: Copyright © 2022 Lei Zhao et al. Open access under the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9569229/
https://www.ncbi.nlm.nih.gov/pubmed/36254202
http://dx.doi.org/10.1155/2022/9620755