
Infrared and Visible Image Fusion through Details Preservation


Bibliographic Details
Main Authors: Liu, Yaochen, Dong, Lili, Ji, Yuanyuan, Xu, Wenhai
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832652/
https://www.ncbi.nlm.nih.gov/pubmed/31635137
http://dx.doi.org/10.3390/s19204556
author Liu, Yaochen
Dong, Lili
Ji, Yuanyuan
Xu, Wenhai
author_facet Liu, Yaochen
Dong, Lili
Ji, Yuanyuan
Xu, Wenhai
author_sort Liu, Yaochen
collection PubMed
description In many practical applications, the fused image must contain high-quality details in order to provide a comprehensive representation of the real scene. However, existing image fusion methods suffer from detail loss because errors accumulate across their sequential processing steps. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details can be separated into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better results in objective assessment.
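
To make the pipeline in the abstract concrete, the following is a minimal sketch of a detail-preserving fusion flow: decompose each source into base and detail parts, fuse the detail parts block-wise in the DCT domain, fuse the base parts with a simple weighting, and add the results. It is only illustrative: the paper's guided-filter decomposition with an edge-only guidance image and its two CNNs are replaced here by a Gaussian low-pass and a magnitude-based DCT coefficient selection, and all function names and parameters are assumptions, not the authors' implementation.

```python
# Illustrative sketch of the detail-preserving fusion flow described above.
# Function names, parameters, and filter choices are assumptions made for
# this example; they are not taken from the paper's implementation.
import numpy as np
import cv2  # requires opencv-python


def decompose(img, ksize=31):
    """Split a single-channel float32 image into base and detail parts.

    The paper uses a guided filter whose guidance image keeps only strong
    edges; a plain Gaussian low-pass is used here purely as a stand-in.
    """
    base = cv2.GaussianBlur(img, (ksize, ksize), 0)
    return base, img - base


def fuse_details_dct(det_ir, det_vis, block=8):
    """Block-wise DCT fusion of the two detail parts.

    For each block, keep the DCT coefficient from whichever source has the
    larger magnitude -- a simple stand-in for the paper's multi-layer
    CNN-feature fusion strategy in the DCT domain.
    """
    h, w = det_ir.shape
    # Fallback for border pixels not covered by a full block.
    fused = 0.5 * (det_ir + det_vis)
    for y in range(0, h - h % block, block):
        for x in range(0, w - w % block, block):
            a = cv2.dct(np.ascontiguousarray(det_ir[y:y + block, x:x + block]))
            b = cv2.dct(np.ascontiguousarray(det_vis[y:y + block, x:x + block]))
            fused[y:y + block, x:x + block] = cv2.idct(np.where(np.abs(a) >= np.abs(b), a, b))
    return fused


def fuse(ir, vis, w_ir=0.5):
    """Fuse registered infrared and visible images (float32 arrays in [0, 1])."""
    base_ir, det_ir = decompose(ir)
    base_vis, det_vis = decompose(vis)
    base_fused = w_ir * base_ir + (1.0 - w_ir) * base_vis  # weighted base fusion
    det_fused = fuse_details_dct(det_ir, det_vis)          # detail fusion in DCT domain
    return np.clip(base_fused + det_fused, 0.0, 1.0)       # fused image = detail + base
```

The block-wise choose-max-magnitude rule is a common DCT fusion heuristic and only approximates the paper's feature-level strategy; the weighted base fusion mirrors the weighting method mentioned in the abstract.
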
format Online
Article
Text
id pubmed-6832652
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-6832652 2019-11-25 Infrared and Visible Image Fusion through Details Preservation Liu, Yaochen Dong, Lili Ji, Yuanyuan Xu, Wenhai Sensors (Basel) Article In many practical applications, the fused image must contain high-quality details in order to provide a comprehensive representation of the real scene. However, existing image fusion methods suffer from detail loss because errors accumulate across their sequential processing steps. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so rich fine details can be separated into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target regions of the source images but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, the proposed method offers (i) better visual quality in subjective evaluation of the fused images and (ii) better results in objective assessment. MDPI 2019-10-20 /pmc/articles/PMC6832652/ /pubmed/31635137 http://dx.doi.org/10.3390/s19204556 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Liu, Yaochen
Dong, Lili
Ji, Yuanyuan
Xu, Wenhai
Infrared and Visible Image Fusion through Details Preservation
title Infrared and Visible Image Fusion through Details Preservation
title_full Infrared and Visible Image Fusion through Details Preservation
title_fullStr Infrared and Visible Image Fusion through Details Preservation
title_full_unstemmed Infrared and Visible Image Fusion through Details Preservation
title_short Infrared and Visible Image Fusion through Details Preservation
title_sort infrared and visible image fusion through details preservation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832652/
https://www.ncbi.nlm.nih.gov/pubmed/31635137
http://dx.doi.org/10.3390/s19204556
work_keys_str_mv AT liuyaochen infraredandvisibleimagefusionthroughdetailspreservation
AT donglili infraredandvisibleimagefusionthroughdetailspreservation
AT jiyuanyuan infraredandvisibleimagefusionthroughdetailspreservation
AT xuwenhai infraredandvisibleimagefusionthroughdetailspreservation