MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion
The challenging issues in infrared and visible image fusion (IVIF) are extracting and fusing as much useful information as possible contained in the source images, namely, the rich textures in visible images and the significant contrast in infrared images. Existing fusion methods cannot address this...
Main Authors: | Yang, Danqing, Wang, Xiaorui, Zhu, Naibo, Li, Shuang, Hou, Na |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10385123/ https://www.ncbi.nlm.nih.gov/pubmed/37514617 http://dx.doi.org/10.3390/s23146322 |
_version_ | 1785081325918617600 |
---|---|
author | Yang, Danqing Wang, Xiaorui Zhu, Naibo Li, Shuang Hou, Na |
author_facet | Yang, Danqing Wang, Xiaorui Zhu, Naibo Li, Shuang Hou, Na |
author_sort | Yang, Danqing |
collection | PubMed |
description | The challenging issues in infrared and visible image fusion (IVIF) are extracting and fusing as much useful information as possible contained in the source images, namely, the rich textures in visible images and the significant contrast in infrared images. Existing fusion methods cannot address this problem well due to the handcrafted fusion operations and the extraction of features only from a single scale. In this work, we solve the problems of insufficient information extraction and fusion from another perspective to overcome the difficulties in lacking textures and unhighlighted targets in fused images. We propose a multi-scale feature extraction (MFE) and joint attention fusion (JAF) based end-to-end method using a generative adversarial network (MJ-GAN) framework for the aim of IVIF. The MFE modules are embedded in the two-stream structure-based generator in a densely connected manner to comprehensively extract multi-grained deep features from the source image pairs and reuse them during reconstruction. Moreover, an improved self-attention structure is introduced into the MFEs to enhance the pertinence among multi-grained features. The merging procedure for salient and important features is conducted via the JAF network in a feature recalibration manner, which also produces the fused image in a reasonable manner. Eventually, we can reconstruct a primary fused image with the major infrared radiometric information and a small amount of visible texture information via a single decoder network. The dual discriminator with strong discriminative power can add more texture and contrast information to the final fused image. Extensive experiments on four publicly available datasets show that the proposed method ultimately achieves phenomenal performance in both visual quality and quantitative assessment compared with nine leading algorithms. |
format | Online Article Text |
id | pubmed-10385123 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-103851232023-07-30 MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion Yang, Danqing Wang, Xiaorui Zhu, Naibo Li, Shuang Hou, Na Sensors (Basel) Article The challenging issues in infrared and visible image fusion (IVIF) are extracting and fusing as much useful information as possible contained in the source images, namely, the rich textures in visible images and the significant contrast in infrared images. Existing fusion methods cannot address this problem well due to the handcrafted fusion operations and the extraction of features only from a single scale. In this work, we solve the problems of insufficient information extraction and fusion from another perspective to overcome the difficulties in lacking textures and unhighlighted targets in fused images. We propose a multi-scale feature extraction (MFE) and joint attention fusion (JAF) based end-to-end method using a generative adversarial network (MJ-GAN) framework for the aim of IVIF. The MFE modules are embedded in the two-stream structure-based generator in a densely connected manner to comprehensively extract multi-grained deep features from the source image pairs and reuse them during reconstruction. Moreover, an improved self-attention structure is introduced into the MFEs to enhance the pertinence among multi-grained features. The merging procedure for salient and important features is conducted via the JAF network in a feature recalibration manner, which also produces the fused image in a reasonable manner. Eventually, we can reconstruct a primary fused image with the major infrared radiometric information and a small amount of visible texture information via a single decoder network. The dual discriminator with strong discriminative power can add more texture and contrast information to the final fused image. Extensive experiments on four publicly available datasets show that the proposed method ultimately achieves phenomenal performance in both visual quality and quantitative assessment compared with nine leading algorithms. MDPI 2023-07-12 /pmc/articles/PMC10385123/ /pubmed/37514617 http://dx.doi.org/10.3390/s23146322 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Yang, Danqing Wang, Xiaorui Zhu, Naibo Li, Shuang Hou, Na MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion |
title | MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion |
title_full | MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion |
title_fullStr | MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion |
title_full_unstemmed | MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion |
title_short | MJ-GAN: Generative Adversarial Network with Multi-Grained Feature Extraction and Joint Attention Fusion for Infrared and Visible Image Fusion |
title_sort | mj-gan: generative adversarial network with multi-grained feature extraction and joint attention fusion for infrared and visible image fusion |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10385123/ https://www.ncbi.nlm.nih.gov/pubmed/37514617 http://dx.doi.org/10.3390/s23146322 |
work_keys_str_mv | AT yangdanqing mjgangenerativeadversarialnetworkwithmultigrainedfeatureextractionandjointattentionfusionforinfraredandvisibleimagefusion AT wangxiaorui mjgangenerativeadversarialnetworkwithmultigrainedfeatureextractionandjointattentionfusionforinfraredandvisibleimagefusion AT zhunaibo mjgangenerativeadversarialnetworkwithmultigrainedfeatureextractionandjointattentionfusionforinfraredandvisibleimagefusion AT lishuang mjgangenerativeadversarialnetworkwithmultigrainedfeatureextractionandjointattentionfusionforinfraredandvisibleimagefusion AT houna mjgangenerativeadversarialnetworkwithmultigrainedfeatureextractionandjointattentionfusionforinfraredandvisibleimagefusion |
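The description field above characterizes the MJ-GAN architecture only at a high level: a two-stream generator whose streams use multi-scale feature extraction (MFE) modules, a joint attention fusion (JAF) stage that merges the infrared and visible features by recalibration, a single decoder, and a dual discriminator. The following is a minimal PyTorch-style sketch of that pipeline as read from the abstract; every layer width, kernel size, and module design here is an illustrative assumption rather than the authors' implementation, and the dense connections and improved self-attention mentioned in the abstract are omitted for brevity.

```python
# Hypothetical sketch of the MJ-GAN pipeline described in the abstract.
# Not the authors' code: module designs and sizes are assumptions.
import torch
import torch.nn as nn

class MFE(nn.Module):
    """Multi-scale feature extraction: parallel 3x3 and 5x5 convolutions
    whose outputs are concatenated (a common multi-grained design)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.b3 = nn.Conv2d(in_ch, out_ch // 2, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, out_ch // 2, 5, padding=2)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, x):
        return self.act(torch.cat([self.b3(x), self.b5(x)], dim=1))

class JAF(nn.Module):
    """Joint attention fusion sketched as squeeze-and-excitation-style
    channel recalibration over the concatenated IR/visible features."""
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // 4, 1), nn.ReLU(),
            nn.Conv2d(ch // 4, ch, 1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return feats * self.gate(feats)

class Generator(nn.Module):
    """Two-stream encoder (infrared and visible), JAF fusion, one decoder."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc_ir = nn.Sequential(MFE(1, ch), MFE(ch, ch))
        self.enc_vis = nn.Sequential(MFE(1, ch), MFE(ch, ch))
        self.fuse = JAF(2 * ch)
        self.dec = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        f = torch.cat([self.enc_ir(ir), self.enc_vis(vis)], dim=1)
        return self.dec(self.fuse(f))

class Discriminator(nn.Module):
    """One of the two discriminators in the dual-discriminator setup
    (one judging the fused image against infrared, one against visible)."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(2 * ch, 1),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)
    vis = torch.rand(1, 1, 128, 128)
    fused = Generator()(ir, vis)
    print(fused.shape, Discriminator()(fused).shape)  # (1,1,128,128), (1,1)
```

In an adversarial training loop, the generator output would be scored by two such discriminators, pushing the fused image toward the visible image's textures and the infrared image's contrast simultaneously; the attention and dense-connection details that give MJ-GAN its reported gains are in the original paper, not in this sketch.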