SCFusion: Infrared and Visible Fusion Based on Salient Compensation
| Main Authors | Liu, Haipeng; Ma, Meiyan; Wang, Meng; Chen, Zhaoyu; Zhao, Yibo |
|---|---|
| Format | Online Article Text |
| Language | English |
| Published | MDPI, 2023 |
| Subjects | |
| Online Access | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10378341/ https://www.ncbi.nlm.nih.gov/pubmed/37509931 http://dx.doi.org/10.3390/e25070985 |
_version_ | 1785079741247651840 |
author | Liu, Haipeng Ma, Meiyan Wang, Meng Chen, Zhaoyu Zhao, Yibo |
author_facet | Liu, Haipeng Ma, Meiyan Wang, Meng Chen, Zhaoyu Zhao, Yibo |
author_sort | Liu, Haipeng |
collection | PubMed |
description | The aim of infrared and visible image fusion is to integrate the complementary information of the two modalities into high-quality fused images. However, many deep learning fusion algorithms do not consider the characteristics of infrared images in low-light scenes, leading to weak texture details, low contrast of infrared targets, and poor visual perception in existing methods. Therefore, in this paper we propose a salient compensation-based fusion method that makes full use of the characteristics of infrared and visible images to generate high-quality fused images under low-light conditions. First, we design a multi-scale edge gradient module (MEGB) in the texture main branch to adequately extract the texture information from the dual infrared and visible inputs; in parallel, the salient branch, built on the salient dense residual module (SRDB), is pre-trained with a salient loss to produce saliency maps that supplement the overall network training. We also propose the spatial bias module (SBM) to fuse global information with local information. Finally, extensive comparison experiments with existing methods show that our method has significant advantages in describing target features and global scenes, and ablation experiments demonstrate the effectiveness of the proposed modules. In addition, we verify that the proposed method benefits high-level vision on a semantic segmentation task. |
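The abstract above only names the paper's modules (MEGB, SRDB, SBM) without giving their equations. As a minimal illustration of the general idea of saliency-guided compensation only — not the paper's actual architecture — a pixel-wise fusion weighted by a saliency map might be sketched as follows; the function name `salient_compensation_fuse` and the linear weighting scheme are assumptions for this sketch:

```python
import numpy as np

def salient_compensation_fuse(ir: np.ndarray, vis: np.ndarray,
                              saliency: np.ndarray) -> np.ndarray:
    """Hypothetical saliency-weighted fusion of co-registered images.

    Where the saliency map is high (salient infrared targets), the fused
    pixel leans toward the infrared image; elsewhere it keeps the visible
    image's texture. All inputs share the same shape, values in [0, 1].
    """
    s = np.clip(saliency, 0.0, 1.0)  # guard against out-of-range weights
    return s * ir + (1.0 - s) * vis
```

In the paper itself the saliency map comes from a pre-trained salient branch and the combination is learned rather than a fixed linear blend; this sketch only shows why a saliency map can compensate low-light fusion at all.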
format | Online Article Text |
id | pubmed-10378341 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10378341 2023-07-29 SCFusion: Infrared and Visible Fusion Based on Salient Compensation Liu, Haipeng Ma, Meiyan Wang, Meng Chen, Zhaoyu Zhao, Yibo Entropy (Basel) Article MDPI 2023-06-27 /pmc/articles/PMC10378341/ /pubmed/37509931 http://dx.doi.org/10.3390/e25070985 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Liu, Haipeng Ma, Meiyan Wang, Meng Chen, Zhaoyu Zhao, Yibo SCFusion: Infrared and Visible Fusion Based on Salient Compensation |
title | SCFusion: Infrared and Visible Fusion Based on Salient Compensation |
title_full | SCFusion: Infrared and Visible Fusion Based on Salient Compensation |
title_fullStr | SCFusion: Infrared and Visible Fusion Based on Salient Compensation |
title_full_unstemmed | SCFusion: Infrared and Visible Fusion Based on Salient Compensation |
title_short | SCFusion: Infrared and Visible Fusion Based on Salient Compensation |
title_sort | scfusion: infrared and visible fusion based on salient compensation |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10378341/ https://www.ncbi.nlm.nih.gov/pubmed/37509931 http://dx.doi.org/10.3390/e25070985 |
work_keys_str_mv | AT liuhaipeng scfusioninfraredandvisiblefusionbasedonsalientcompensation AT mameiyan scfusioninfraredandvisiblefusionbasedonsalientcompensation AT wangmeng scfusioninfraredandvisiblefusionbasedonsalientcompensation AT chenzhaoyu scfusioninfraredandvisiblefusionbasedonsalientcompensation AT zhaoyibo scfusioninfraredandvisiblefusionbasedonsalientcompensation |