Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection
RGB-D salient object detection (SOD) demonstrates its superiority in detecting salient objects in complex environments due to the additional depth information introduced in the data. Inevitably, an independent stream is introduced to extract features from depth images, leading to extra computation and parameters. […]
Main authors: | Ren, Guangyu; Yu, Yinxiao; Liu, Hengyan; Stathaki, Tania |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2022 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9416116/ https://www.ncbi.nlm.nih.gov/pubmed/36015947 http://dx.doi.org/10.3390/s22166188 |
_version_ | 1784776401178591232 |
---|---|
author | Ren, Guangyu Yu, Yinxiao Liu, Hengyan Stathaki, Tania |
author_facet | Ren, Guangyu Yu, Yinxiao Liu, Hengyan Stathaki, Tania |
author_sort | Ren, Guangyu |
collection | PubMed |
description | RGB-D salient object detection (SOD) demonstrates its superiority in detecting salient objects in complex environments due to the additional depth information introduced in the data. Inevitably, an independent stream is introduced to extract features from depth images, leading to extra computation and parameters. This methodology sacrifices model size to improve detection accuracy, which may impede the practical application of SOD. To tackle this dilemma, we propose a dynamic knowledge distillation (DKD) method, along with a lightweight structure, which significantly reduces the computational burden while maintaining validity. This method considers the performance of both the teacher and the student during training and dynamically assigns the distillation weight, instead of applying a fixed weight to the student model. We also investigate the issue of the RGB-D early fusion strategy in distillation and propose a simple noise elimination method to mitigate the impact of distorted training data caused by low-quality depth maps. Extensive experiments on five public datasets demonstrate that our method achieves competitive performance with a fast inference speed (136 FPS) compared to 12 prior methods. (An illustrative code sketch of the dynamic weighting and noise-filtering ideas follows this record.) |
format | Online Article Text |
id | pubmed-9416116 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9416116 2022-08-27 Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection Ren, Guangyu Yu, Yinxiao Liu, Hengyan Stathaki, Tania Sensors (Basel) Article RGB-D salient object detection (SOD) demonstrates its superiority in detecting salient objects in complex environments due to the additional depth information introduced in the data. Inevitably, an independent stream is introduced to extract features from depth images, leading to extra computation and parameters. This methodology sacrifices model size to improve detection accuracy, which may impede the practical application of SOD. To tackle this dilemma, we propose a dynamic knowledge distillation (DKD) method, along with a lightweight structure, which significantly reduces the computational burden while maintaining validity. This method considers the performance of both the teacher and the student during training and dynamically assigns the distillation weight, instead of applying a fixed weight to the student model. We also investigate the issue of the RGB-D early fusion strategy in distillation and propose a simple noise elimination method to mitigate the impact of distorted training data caused by low-quality depth maps. Extensive experiments on five public datasets demonstrate that our method achieves competitive performance with a fast inference speed (136 FPS) compared to 12 prior methods. MDPI 2022-08-18 /pmc/articles/PMC9416116/ /pubmed/36015947 http://dx.doi.org/10.3390/s22166188 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Ren, Guangyu Yu, Yinxiao Liu, Hengyan Stathaki, Tania Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection |
title | Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection |
title_full | Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection |
title_fullStr | Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection |
title_full_unstemmed | Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection |
title_short | Dynamic Knowledge Distillation with Noise Elimination for RGB-D Salient Object Detection |
title_sort | dynamic knowledge distillation with noise elimination for rgb-d salient object detection |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9416116/ https://www.ncbi.nlm.nih.gov/pubmed/36015947 http://dx.doi.org/10.3390/s22166188 |
work_keys_str_mv | AT renguangyu dynamicknowledgedistillationwithnoiseeliminationforrgbdsalientobjectdetection AT yuyinxiao dynamicknowledgedistillationwithnoiseeliminationforrgbdsalientobjectdetection AT liuhengyan dynamicknowledgedistillationwithnoiseeliminationforrgbdsalientobjectdetection AT stathakitania dynamicknowledgedistillationwithnoiseeliminationforrgbdsalientobjectdetection |
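The abstract above describes two mechanisms at a high level: a distillation weight that is recomputed during training from how the teacher and student are currently performing (rather than being a fixed coefficient), and a noise-elimination step that discards training samples whose depth maps are too distorted. The sketch below is only meant to make those two ideas concrete under assumed definitions; it is not the authors' published implementation. The choice of loss functions, the particular weighting rule, the notion of a per-sample depth-quality score, and the names `dynamic_distillation_loss` and `keep_clean_depth_samples` are all assumptions.

```python
# Minimal, hypothetical sketch (PyTorch): a dynamically weighted distillation loss
# plus a simple depth-noise filter.  Not the authors' code; the weighting rule and
# the depth-quality score are illustrative assumptions.
import torch
import torch.nn.functional as F


def dynamic_distillation_loss(student_logits, teacher_logits, target, eps=1e-6):
    """Supervised loss + distillation loss with a weight recomputed per batch.

    student_logits / teacher_logits: raw saliency predictions, shape (B, 1, H, W).
    target: binary ground-truth saliency maps of the same shape.
    """
    # How well each model currently fits the ground truth.
    student_sup = F.binary_cross_entropy_with_logits(student_logits, target)
    with torch.no_grad():
        teacher_sup = F.binary_cross_entropy_with_logits(teacher_logits, target)

    # Illustrative dynamic weight: lean on the teacher more when the student is
    # currently doing worse than the teacher, and less otherwise.
    ratio = student_sup.detach() / (teacher_sup + eps)
    kd_weight = ratio / (1.0 + ratio)  # value in (0, 1)

    # Distillation term: the student mimics the teacher's soft saliency map.
    kd_loss = F.binary_cross_entropy_with_logits(
        student_logits, torch.sigmoid(teacher_logits).detach()
    )
    return student_sup + kd_weight * kd_loss


def keep_clean_depth_samples(depth_quality, threshold=0.5):
    """Toy noise-elimination filter: boolean mask keeping samples whose (assumed)
    depth-quality score exceeds a threshold."""
    return depth_quality > threshold


if __name__ == "__main__":
    b, h, w = 4, 32, 32
    student = torch.randn(b, 1, h, w, requires_grad=True)
    teacher = torch.randn(b, 1, h, w)
    gt = (torch.rand(b, 1, h, w) > 0.5).float()
    loss = dynamic_distillation_loss(student, teacher, gt)
    mask = keep_clean_depth_samples(torch.rand(b))
    print(loss.item(), mask.tolist())
```

In the paper's setting the logits would come from the teacher and student saliency networks and the quality score from some assessment of the depth map; plain random tensors stand in for them here purely to make the sketch runnable.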