
FDNet: An end-to-end fusion decomposition network for infrared and visible images

Infrared and visible image fusion can generate a fused image with clear texture and prominent targets under extreme conditions, a capability important for all-day, all-weather detection and other tasks. However, most existing fusion methods extract features from infrared and visible images with convolutional neural networks (CNNs) and often fail to make full use of the salient objects and texture features in the raw images, leading to problems such as insufficient texture detail and low contrast in the fused images. To this end, we propose an unsupervised end-to-end Fusion Decomposition Network (FDNet) for infrared and visible image fusion. First, we construct a fusion network that extracts gradient and intensity information from raw images using multi-scale layers, depthwise separable convolutions, and an improved convolutional block attention module (I-CBAM). Second, because FDNet extracts features from the gradient and intensity information of the image, gradient and intensity losses are designed accordingly. The intensity loss adopts an improved Frobenius norm to adjust the weighting between the fused image and the two raw images so as to select more effective information. The gradient loss introduces an adaptive weight block that determines the optimization objective from the richness of texture information at the pixel scale, ultimately guiding the fused image to generate more abundant texture information. Finally, we design a single- and dual-channel convolutional-layer decomposition network that keeps the decomposed images as consistent as possible with the input raw images, forcing the fused image to contain richer detail information. Compared with various other representative image fusion methods, our proposed method not only delivers good subjective visual quality but also achieves advanced fusion performance in objective evaluation.

Bibliographic Details
Main Authors: Di, Jing, Ren, Li, Liu, Jizhao, Guo, Wenqing, Zhange, Huaikun, Liu, Qidong, Lian, Jing
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10506725/
https://www.ncbi.nlm.nih.gov/pubmed/37721948
http://dx.doi.org/10.1371/journal.pone.0290231
_version_ 1785107164939943936
author Di, Jing
Ren, Li
Liu, Jizhao
Guo, Wenqing
Zhange, Huaikun
Liu, Qidong
Lian, Jing
author_facet Di, Jing
Ren, Li
Liu, Jizhao
Guo, Wenqing
Zhange, Huaikun
Liu, Qidong
Lian, Jing
author_sort Di, Jing
collection PubMed
description Infrared and visible image fusion can generate a fused image with clear texture and prominent targets under extreme conditions, a capability important for all-day, all-weather detection and other tasks. However, most existing fusion methods extract features from infrared and visible images with convolutional neural networks (CNNs) and often fail to make full use of the salient objects and texture features in the raw images, leading to problems such as insufficient texture detail and low contrast in the fused images. To this end, we propose an unsupervised end-to-end Fusion Decomposition Network (FDNet) for infrared and visible image fusion. First, we construct a fusion network that extracts gradient and intensity information from raw images using multi-scale layers, depthwise separable convolutions, and an improved convolutional block attention module (I-CBAM). Second, because FDNet extracts features from the gradient and intensity information of the image, gradient and intensity losses are designed accordingly. The intensity loss adopts an improved Frobenius norm to adjust the weighting between the fused image and the two raw images so as to select more effective information. The gradient loss introduces an adaptive weight block that determines the optimization objective from the richness of texture information at the pixel scale, ultimately guiding the fused image to generate more abundant texture information. Finally, we design a single- and dual-channel convolutional-layer decomposition network that keeps the decomposed images as consistent as possible with the input raw images, forcing the fused image to contain richer detail information. Compared with various other representative image fusion methods, our proposed method not only delivers good subjective visual quality but also achieves advanced fusion performance in objective evaluation.
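The intensity and gradient losses described in the abstract can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation: the function names, the fixed 0.5 weights, and the per-pixel max rule standing in for the adaptive weight block are all assumptions made for clarity.

```python
import numpy as np

def intensity_loss(fused, ir, vis, w_ir=0.5, w_vis=0.5):
    """Weighted Frobenius-norm distances between the fused image and
    each raw image; the weights trade off infrared vs. visible content.
    (The paper's 'improved' norm and weighting scheme are not shown.)"""
    h, w = fused.shape
    return (w_ir * np.linalg.norm(fused - ir, 'fro') ** 2 +
            w_vis * np.linalg.norm(fused - vis, 'fro') ** 2) / (h * w)

def gradient(img):
    """Forward-difference gradient magnitude (L1 of x/y differences)."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def gradient_loss(fused, ir, vis):
    """Per pixel, take the source with the larger gradient magnitude as
    the target -- a crude stand-in for the adaptive weight block that
    favors whichever raw image has richer texture at that pixel."""
    g_f, g_ir, g_vis = gradient(fused), gradient(ir), gradient(vis)
    target = np.maximum(g_ir, g_vis)
    return np.mean((g_f - target) ** 2)
```

A fused image identical to both sources yields zero for both terms; deviating in intensity or losing texture present in either source raises the respective loss, which is the behavior the abstract attributes to each term.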
format Online
Article
Text
id pubmed-10506725
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-105067252023-09-19 FDNet: An end-to-end fusion decomposition network for infrared and visible images Di, Jing Ren, Li Liu, Jizhao Guo, Wenqing Zhange, Huaikun Liu, Qidong Lian, Jing PLoS One Research Article Public Library of Science 2023-09-18 /pmc/articles/PMC10506725/ /pubmed/37721948 http://dx.doi.org/10.1371/journal.pone.0290231 Text en © 2023 Di et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Di, Jing
Ren, Li
Liu, Jizhao
Guo, Wenqing
Zhange, Huaikun
Liu, Qidong
Lian, Jing
FDNet: An end-to-end fusion decomposition network for infrared and visible images
title FDNet: An end-to-end fusion decomposition network for infrared and visible images
title_full FDNet: An end-to-end fusion decomposition network for infrared and visible images
title_fullStr FDNet: An end-to-end fusion decomposition network for infrared and visible images
title_full_unstemmed FDNet: An end-to-end fusion decomposition network for infrared and visible images
title_short FDNet: An end-to-end fusion decomposition network for infrared and visible images
title_sort fdnet: an end-to-end fusion decomposition network for infrared and visible images
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10506725/
https://www.ncbi.nlm.nih.gov/pubmed/37721948
http://dx.doi.org/10.1371/journal.pone.0290231
work_keys_str_mv AT dijing fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages
AT renli fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages
AT liujizhao fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages
AT guowenqing fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages
AT zhangehuaikun fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages
AT liuqidong fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages
AT lianjing fdnetanendtoendfusiondecompositionnetworkforinfraredandvisibleimages