
ECFuse: Edge-Consistent and Correlation-Driven Fusion Framework for Infrared and Visible Image Fusion



Bibliographic Details
Main Authors: Chen, Hanrui; Deng, Lei; Zhu, Lianqing; Dong, Mingli
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10574844/
https://www.ncbi.nlm.nih.gov/pubmed/37836900
http://dx.doi.org/10.3390/s23198071
author Chen, Hanrui
Deng, Lei
Zhu, Lianqing
Dong, Mingli
author_sort Chen, Hanrui
collection PubMed
description Infrared and visible image fusion (IVIF) aims to render fused images that retain the merits of both modalities. To tackle the challenge of fusing cross-modality information while avoiding texture loss in IVIF, we propose a novel edge-consistent and correlation-driven fusion framework (ECFuse). The framework leverages our proposed edge-consistency fusion module to maintain rich and coherent edges and textures, while introducing a correlation-driven deep learning network to fuse cross-modality global features and modality-specific local features. First, the framework employs a multi-scale transformation (MST) to decompose the source images into base and detail layers. The edge-consistent fusion module then fuses the detail layers while preserving the coherence of edges through consistency verification, and a correlation-driven fusion network fuses the base layers, which carry both modalities' main features, in the transform domain. Finally, the fused image is reconstructed in the spatial domain by the inverse MST. We conducted experiments comparing ECFuse with both conventional and deep learning approaches on the TNO, LLVIP and [Formula: see text] datasets. The qualitative and quantitative evaluation results demonstrate the effectiveness of our framework. We also show that ECFuse can boost performance on downstream infrared–visible object detection in a unified benchmark.
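The pipeline the description outlines (multi-scale decomposition into base and detail layers, a fusion rule per layer, inverse transform) can be sketched in a few lines of NumPy. This is a toy single-level stand-in, not the authors' method: a Gaussian blur plays the role of the MST, a simple averaging rule replaces the correlation-driven network on the base layers, and a max-absolute rule replaces the edge-consistent module on the detail layers.

```python
import numpy as np

def gaussian_kernel(size: int = 11, sigma: float = 2.0) -> np.ndarray:
    """2-D normalized Gaussian kernel."""
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def blur(img: np.ndarray, k: np.ndarray) -> np.ndarray:
    """'Same'-size 2-D convolution via edge padding and sliding windows."""
    pad = k.shape[0] // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, k.shape)
    return np.einsum("ijkl,kl->ij", win, k)

def fuse(ir: np.ndarray, vis: np.ndarray) -> np.ndarray:
    """Toy base/detail fusion of two registered grayscale images in [0, 1]."""
    k = gaussian_kernel()
    base_ir, base_vis = blur(ir, k), blur(vis, k)   # low-frequency "base" layers
    det_ir, det_vis = ir - base_ir, vis - base_vis  # high-frequency "detail" layers
    base_f = 0.5 * (base_ir + base_vis)             # base rule: mean of both modalities
    # detail rule: keep whichever modality has the stronger local detail
    det_f = np.where(np.abs(det_ir) >= np.abs(det_vis), det_ir, det_vis)
    return base_f + det_f                           # "inverse transform": recompose layers
```

One sanity check for any decomposition-based scheme: fusing an image with itself must return the original, since base plus detail sums back to the input.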
format Online
Article
Text
id pubmed-10574844
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10574844 2023-10-14
journal Sensors (Basel)
published online 2023-09-25
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title ECFuse: Edge-Consistent and Correlation-Driven Fusion Framework for Infrared and Visible Image Fusion
topic Article