Infrared and Visible Image Fusion Method Using Salience Detection and Convolutional Neural Network

This paper presents an algorithm for infrared and visible image fusion using salience detection and convolutional neural networks, with the aim of integrating discriminative features and improving the overall quality of visual perception. First, a global contrast-based salience detection algorithm is applied to the infrared image to extract salient features, highlighting high-brightness regions while suppressing low-brightness regions and image noise. Second, a dedicated loss function, based on the principle of salience detection, is designed for infrared images to guide feature extraction and reconstruction in the network, while the more common gradient loss is used for visible images. A modified residual network then performs feature extraction and image reconstruction. Extensive qualitative and quantitative experiments show that the fused images are sharper, contain more information about the scene, and look more like high-quality visible images. Generalization experiments further demonstrate that the proposed model generalizes well and is not bound to the limitations of a particular sensor. Overall, the proposed algorithm performs better than other state-of-the-art methods.
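The abstract outlines a concrete pipeline: global contrast-based salience detection on the infrared image, a salience-guided loss for the infrared branch, a gradient loss for the visible branch, and a modified residual network for reconstruction. The paper's code is not part of this record, but the sketch below illustrates what the first two ingredients could look like in PyTorch. The histogram-based contrast measure, the function names, and the weighting factor `lam` are assumptions made for illustration, not the authors' exact formulation, and the residual fusion network itself is not sketched here.

```python
# Illustrative sketch only -- NOT the paper's implementation.
# Assumes grayscale images in [0, 1] stored as torch tensors of shape (B, 1, H, W).
import torch
import torch.nn.functional as F


def global_contrast_salience(ir: torch.Tensor, bins: int = 256) -> torch.Tensor:
    """Global contrast-based salience: a pixel is salient in proportion to its
    summed intensity difference from all other pixels, computed via a histogram."""
    b, _, h, w = ir.shape
    sal = torch.empty_like(ir)
    levels = torch.linspace(0.0, 1.0, bins, device=ir.device)
    for i in range(b):
        img = ir[i, 0]
        # Intensity histogram of this image.
        hist = torch.histc(img, bins=bins, min=0.0, max=1.0)
        # Salience of each intensity level: sum_j hist[j] * |level_i - level_j|.
        diff = (levels.unsqueeze(1) - levels.unsqueeze(0)).abs()
        level_sal = (hist.unsqueeze(0) * diff).sum(dim=1)
        level_sal = level_sal / (level_sal.max() + 1e-8)  # normalise to [0, 1]
        # Map each pixel's intensity back to the salience of its level.
        idx = (img * (bins - 1)).long().clamp(0, bins - 1)
        sal[i, 0] = level_sal[idx]
    return sal


def sobel_gradient(x: torch.Tensor) -> torch.Tensor:
    """Image gradient magnitude via Sobel filters (used for the visible-image term)."""
    kx = torch.tensor([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3).contiguous()
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)


def fusion_loss(fused, ir, vis, lam: float = 10.0) -> torch.Tensor:
    """Hypothetical combined loss: salience-weighted intensity term for the infrared
    image plus a gradient term for the visible image (the weight `lam` is assumed)."""
    sal = global_contrast_salience(ir)
    loss_ir = (sal * (fused - ir).abs()).mean()
    loss_vis = (sobel_gradient(fused) - sobel_gradient(vis)).abs().mean()
    return loss_ir + lam * loss_vis
```

In this reading, the salience map acts as a per-pixel weight, so bright, salient infrared regions dominate the intensity term, while the gradient term pushes the fused result toward the texture of the visible image.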

Bibliographic Details
Main Authors: Wang, Zetian; Wang, Fei; Wu, Dan; Gao, Guowang
Format: Online Article Text
Language: English
Published: Sensors (Basel), MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9319094/
https://www.ncbi.nlm.nih.gov/pubmed/35891107
http://dx.doi.org/10.3390/s22145430
collection PubMed
id pubmed-9319094
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Sensors (Basel)
published online 2022-07-20
license © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).