
FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation

Bibliographic Details
Main Authors: Yuan, Wenhao; Lu, Xiaoyan; Zhang, Rongfen; Liu, Yuhong
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9858574/
https://www.ncbi.nlm.nih.gov/pubmed/36673266
http://dx.doi.org/10.3390/e25010125
author Yuan, Wenhao
Lu, Xiaoyan
Zhang, Rongfen
Liu, Yuhong
author_sort Yuan, Wenhao
collection PubMed
description As a popular research subject in the field of computer vision, knowledge distillation (KD) is widely used in semantic segmentation (SS). However, based on the learning paradigm of the teacher–student model, the poor quality of teacher network feature knowledge still hinders the development of KD technology. In this paper, we investigate the output features of the teacher–student network and propose a feature condensation-based KD network (FCKDNet), which reduces pseudo-knowledge transfer in the teacher–student network. First, combined with the pixel information entropy calculation rule, we design a feature condensation method to separate the foreground feature knowledge from the background noise of the teacher network outputs. Then, the obtained feature condensation matrix is applied to the original outputs of the teacher and student networks to improve the feature representation capability. In addition, after performing feature condensation on the teacher network, we propose a soft enhancement method of features based on spatial and channel dimensions to improve the dependency of pixels in the feature maps. Finally, we divide the outputs of the teacher network into spatial condensation features and channel condensation features and perform distillation loss calculation with the student network separately to assist the student network to converge faster. Extensive experiments on the public datasets Pascal VOC and Cityscapes demonstrate that our proposed method improves the baseline by 3.16% and 2.98% in terms of mAcc, and 2.03% and 2.30% in terms of mIoU, respectively, and has better segmentation performance and robustness than the mainstream methods.
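The condensation step the abstract describes (per-pixel information entropy on the teacher's output, a binary condensation matrix separating confident foreground knowledge from background noise, and a masked distillation loss) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the entropy threshold, the way the mask weights the loss, and the function names `condensation_mask` and `condensed_kd_loss` are all hypothetical.

```python
import torch
import torch.nn.functional as F

def condensation_mask(teacher_logits, threshold=0.5):
    # Per-pixel information entropy of the teacher's class distribution.
    p = F.softmax(teacher_logits, dim=1)                  # (B, C, H, W)
    entropy = -(p * torch.log(p.clamp_min(1e-8))).sum(1)  # (B, H, W)
    # Normalize by log(C) so entropy lies in [0, 1].
    entropy = entropy / torch.log(torch.tensor(float(teacher_logits.size(1))))
    # Low entropy = confident foreground knowledge; high entropy = background noise.
    return (entropy < threshold).float().unsqueeze(1)     # (B, 1, H, W)

def condensed_kd_loss(student_logits, teacher_logits, T=4.0, threshold=0.5):
    # Apply the condensation matrix to the per-pixel KL divergence, so only
    # foreground pixels contribute to the distillation signal.
    mask = condensation_mask(teacher_logits, threshold)
    s = F.log_softmax(student_logits / T, dim=1)
    t = F.softmax(teacher_logits / T, dim=1)
    kl = (t * (torch.log(t.clamp_min(1e-8)) - s)).sum(1, keepdim=True)  # (B, 1, H, W)
    return (kl * mask).sum() / mask.sum().clamp_min(1.0) * T * T
```

Averaging only over masked pixels (rather than all pixels) is one plausible reading of "reducing pseudo-knowledge transfer": background pixels neither contribute gradient nor dilute the loss.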
format Online
Article
Text
id pubmed-9858574
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9858574 2023-01-21 FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation Yuan, Wenhao Lu, Xiaoyan Zhang, Rongfen Liu, Yuhong Entropy (Basel) Article
MDPI 2023-01-07 /pmc/articles/PMC9858574/ /pubmed/36673266 http://dx.doi.org/10.3390/e25010125 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title FCKDNet: A Feature Condensation Knowledge Distillation Network for Semantic Segmentation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9858574/
https://www.ncbi.nlm.nih.gov/pubmed/36673266
http://dx.doi.org/10.3390/e25010125