
TPFusion: Texture Preserving Fusion of Infrared and Visible Images via Dense Networks


Bibliographic Details
Main Authors: Yang, Zhiguang; Zeng, Shan
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8870949/
https://www.ncbi.nlm.nih.gov/pubmed/35205588
http://dx.doi.org/10.3390/e24020294
Collection: PubMed
Description: In this paper, we design an infrared (IR) and visible (VIS) image fusion method based on unsupervised dense networks, termed TPFusion. Activity level measurements and fusion rules are indispensable parts of conventional image fusion methods, but designing an appropriate fusion process is time-consuming and complicated. In recent years, deep learning-based methods have been proposed to handle this problem. For multi-modality image fusion, however, a single shared network cannot extract effective feature maps from source images obtained by different image sensors. TPFusion avoids this issue. First, we extract the textural information of the source images. Then two densely connected networks are trained to fuse the textural information and the source images, respectively. In this way, more textural details are preserved in the fused image. Moreover, the loss functions that constrain the two densely connected convolutional networks are designed according to the characteristics of the textural information and the source images, so the fused image retains more of the textural information of the source images. To validate our method, we conduct comparison and ablation experiments with both qualitative and quantitative assessments. The ablation experiments confirm the effectiveness of TPFusion. Compared with existing advanced IR and VIS image fusion methods, our results are better in both objective and subjective terms: qualitatively, the fused images have higher contrast and more abundant textural details; quantitatively, TPFusion outperforms existing representative fusion methods.
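The description sketches a two-step pipeline: extract the textural information of each source image, then fuse it with densely connected networks. Neither the texture operator nor the layer details are given in this record, so the minimal Python sketch below is illustrative only: it uses a gradient-magnitude map as a stand-in texture measure and shows the DenseNet-style connectivity pattern (each layer consumes the concatenation of all earlier feature maps) with toy scalar "layers" in place of convolutions.

```python
# Illustrative sketch only: TPFusion's actual texture operator and
# network layers are not specified in this record.

def texture_map(img):
    """Stand-in 'textural information': gradient magnitude of a
    grayscale image (list of rows of floats), computed with central
    differences in the interior and zero at the border."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

def dense_forward(x, layers):
    """DenseNet-style connectivity: each layer receives the
    concatenation of the input and every previous layer's output."""
    features = [x]
    for layer in layers:
        concat = [v for f in features for v in f]  # channel concat
        features.append(layer(concat))
    return [v for f in features for v in f]

# Toy demo: a vertical edge yields a nonzero texture response, and
# two summing "layers" show the dense concatenation growing.
edge = [[0.0, 0.0, 1.0, 1.0] for _ in range(4)]
tex = texture_map(edge)                              # tex[1][1] == 0.5
out = dense_forward([1.0, 2.0], [lambda xs: [sum(xs)]] * 2)
# out == [1.0, 2.0, 3.0, 6.0]
```

In the real method, the second network would take the texture map (not toy scalars) as input, and the loss terms would penalize deviation of the fused image from both the source images and their texture maps.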
ID: pubmed-8870949
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Entropy (Basel)
Published: 2022-02-19
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).