RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification
While a growing number of instruments generate more and more airborne and satellite images, the bottleneck in remote sensing (RS) scene classification has shifted from data limits toward a lack of ground truth samples. Many challenges remain when facing unknown environments, especially those with...
Main Authors: | Zhang, Pei; Li, Ying; Wang, Dong; Wang, Jiyue |
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7956409/ https://www.ncbi.nlm.nih.gov/pubmed/33668138 http://dx.doi.org/10.3390/s21051566 |
_version_ | 1783664428686770176 |
author | Zhang, Pei; Li, Ying; Wang, Dong; Wang, Jiyue
author_facet | Zhang, Pei; Li, Ying; Wang, Dong; Wang, Jiyue
author_sort | Zhang, Pei |
collection | PubMed |
description | While a growing number of instruments generate more and more airborne and satellite images, the bottleneck in remote sensing (RS) scene classification has shifted from data limits toward a lack of ground truth samples. Many challenges remain when facing unknown environments, especially those with insufficient training data. Few-shot classification offers a different picture under the umbrella of meta-learning: mining rich knowledge from only a few samples is possible. In this work, we propose a method named RS-SSKD for few-shot RS scene classification from the perspective of generating powerful representations for the downstream meta-learner. Firstly, we propose a novel two-branch network that takes three pairs of original-transformed images as inputs and incorporates Class Activation Maps (CAMs) to drive the network to mine the most relevant category-specific regions. This strategy ensures that the network generates discriminative embeddings. Secondly, we apply a round of self-knowledge distillation to prevent overfitting and boost performance. Our experiments show that the proposed method surpasses current state-of-the-art approaches on two challenging RS scene datasets: NWPU-RESISC45 and RSD46-WHU. Finally, we conduct various ablation experiments to investigate the effect of each component of the proposed method and analyze the training time of state-of-the-art methods and ours. (A minimal, illustrative sketch of the self-distillation step appears after this record.)
format | Online Article Text |
id | pubmed-7956409 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7956409 2021-03-16 RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification Zhang, Pei Li, Ying Wang, Dong Wang, Jiyue Sensors (Basel) Article While a growing number of instruments generate more and more airborne and satellite images, the bottleneck in remote sensing (RS) scene classification has shifted from data limits toward a lack of ground truth samples. Many challenges remain when facing unknown environments, especially those with insufficient training data. Few-shot classification offers a different picture under the umbrella of meta-learning: mining rich knowledge from only a few samples is possible. In this work, we propose a method named RS-SSKD for few-shot RS scene classification from the perspective of generating powerful representations for the downstream meta-learner. Firstly, we propose a novel two-branch network that takes three pairs of original-transformed images as inputs and incorporates Class Activation Maps (CAMs) to drive the network to mine the most relevant category-specific regions. This strategy ensures that the network generates discriminative embeddings. Secondly, we apply a round of self-knowledge distillation to prevent overfitting and boost performance. Our experiments show that the proposed method surpasses current state-of-the-art approaches on two challenging RS scene datasets: NWPU-RESISC45 and RSD46-WHU. Finally, we conduct various ablation experiments to investigate the effect of each component of the proposed method and analyze the training time of state-of-the-art methods and ours. MDPI 2021-02-24 /pmc/articles/PMC7956409/ /pubmed/33668138 http://dx.doi.org/10.3390/s21051566 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Zhang, Pei Li, Ying Wang, Dong Wang, Jiyue RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification |
title | RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification |
title_full | RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification |
title_fullStr | RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification |
title_full_unstemmed | RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification |
title_short | RS-SSKD: Self-Supervision Equipped with Knowledge Distillation for Few-Shot Remote Sensing Scene Classification |
title_sort | rs-sskd: self-supervision equipped with knowledge distillation for few-shot remote sensing scene classification |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7956409/ https://www.ncbi.nlm.nih.gov/pubmed/33668138 http://dx.doi.org/10.3390/s21051566 |
work_keys_str_mv | AT zhangpei rssskdselfsupervisionequippedwithknowledgedistillationforfewshotremotesensingsceneclassification AT liying rssskdselfsupervisionequippedwithknowledgedistillationforfewshotremotesensingsceneclassification AT wangdong rssskdselfsupervisionequippedwithknowledgedistillationforfewshotremotesensingsceneclassification AT wangjiyue rssskdselfsupervisionequippedwithknowledgedistillationforfewshotremotesensingsceneclassification |
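Note: the record itself contains no code; the full method is in the linked article. As a rough illustration of the "round of self-knowledge distillation" mentioned in the abstract, the sketch below shows a generic born-again-style setup in PyTorch, where a frozen copy of the already-trained network acts as the teacher for a second round of training. All names and hyperparameters (the stand-in backbone, temperature, alpha, 45 classes matching NWPU-RESISC45) are illustrative assumptions, not taken from the paper; in RS-SSKD the distilled model would be the CAM-driven two-branch embedding network described in the abstract.

```python
# Illustrative sketch only: generic self-knowledge distillation, NOT the authors' code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels,
                           temperature=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and softened KL divergence."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * ce + (1.0 - alpha) * kd

# Stand-in backbone; 45 output classes mirrors NWPU-RESISC45 (assumption).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 45))
teacher = copy.deepcopy(backbone).eval()          # frozen first-generation model
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(backbone.parameters(), lr=0.01)
images = torch.randn(8, 3, 84, 84)                # dummy batch of scene crops
labels = torch.randint(0, 45, (8,))

# One distillation step: student matches both the hard labels and the
# teacher's softened predictions on the same batch.
with torch.no_grad():
    teacher_logits = teacher(images)
optimizer.zero_grad()
loss = self_distillation_loss(backbone(images), teacher_logits, labels)
loss.backward()
optimizer.step()
```

The temperature-scaled KL term is the standard knowledge-distillation objective; the "self" aspect here is only that teacher and student share the same architecture and training data, which matches the abstract's description of distilling the network into itself to curb overfitting.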