A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism
Pixel-level image fusion is an effective way to fully exploit the rich texture information of visible images and the salient target characteristics of infrared images. With the development of deep learning technology in recent years, image fusion algorithms based on deep learning have also achieved...
Main Authors: Zheng, Xin; Yang, Qiyong; Si, Pengbo; Wu, Qiang
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9144774/ https://www.ncbi.nlm.nih.gov/pubmed/35632059 http://dx.doi.org/10.3390/s22103651
_version_ | 1784716130189836288 |
author | Zheng, Xin Yang, Qiyong Si, Pengbo Wu, Qiang |
author_facet | Zheng, Xin Yang, Qiyong Si, Pengbo Wu, Qiang |
author_sort | Zheng, Xin |
collection | PubMed |
description | Pixel-level image fusion is an effective way to fully exploit the rich texture information of visible images and the salient target characteristics of infrared images. With the development of deep learning technology in recent years, image fusion algorithms based on deep learning have also achieved great success. However, owing to the lack of sufficient and reliable paired data and the absence of an ideal fusion result to serve as supervision, it is difficult to design a precise network training mode. Moreover, handcrafted fusion strategies struggle to ensure full use of the available information, which easily causes redundancy and omission. To solve these problems, this paper proposes a multi-stage visible and infrared image fusion network based on an attention mechanism (MSFAM). Our method stabilizes the training process through multi-stage training and enhances features with a learnable attention fusion block. To further improve performance, we design a Semantic Constraint module and a Push–Pull loss function for the fusion task. Compared with several recent methods, qualitative comparison shows that our model produces more natural fusion results with stronger applicability. In quantitative experiments, MSFAM achieves the best results on three of the six metrics commonly used in fusion tasks, while the other methods score well on only one or a few metrics. In addition, a common high-level semantic task, object detection, is used to demonstrate the greater benefit of our fusion results for downstream tasks compared with single-modality images and the fusion results of existing methods. All these experiments demonstrate the superiority and effectiveness of our algorithm. |
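The description above mentions a learnable attention fusion block that weights visible and infrared features. As an illustration only, here is a minimal channel-attention fusion sketch in NumPy: a generic softmax-weighted combination over per-channel statistics, not the authors' actual MSFAM block, whose internals are not given in this record. The function name `attention_fuse` and the pooling-plus-softmax design are assumptions for the sketch.

```python
import numpy as np

def attention_fuse(feat_vis: np.ndarray, feat_ir: np.ndarray) -> np.ndarray:
    """Fuse two (C, H, W) feature maps with per-channel attention weights.

    Hypothetical sketch of attention-based fusion, not the MSFAM block:
    each modality gets a per-channel score by global average pooling,
    and a softmax across the two modalities turns the scores into
    convex combination weights for each channel.
    """
    # Global average pooling per channel: (C, H, W) -> (C,)
    score_vis = feat_vis.mean(axis=(1, 2))
    score_ir = feat_ir.mean(axis=(1, 2))

    # Softmax across the two modalities, independently per channel
    stacked = np.stack([score_vis, score_ir])       # (2, C)
    exp = np.exp(stacked - stacked.max(axis=0))     # stabilize
    attn = exp / exp.sum(axis=0)                    # weights sum to 1

    # Broadcast (C,) weights over (C, H, W) and combine
    return (attn[0][:, None, None] * feat_vis
            + attn[1][:, None, None] * feat_ir)
```

Because the weights form a convex combination per channel, the fused map always lies elementwise between the two inputs, and fusing a feature map with itself returns it unchanged. In the paper these weights would be produced by trained layers rather than fixed pooling.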
format | Online Article Text |
id | pubmed-9144774 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-91447742022-05-29 A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism Zheng, Xin Yang, Qiyong Si, Pengbo Wu, Qiang Sensors (Basel) Article Pixel-level image fusion is an effective way to fully exploit the rich texture information of visible images and the salient target characteristics of infrared images. With the development of deep learning technology in recent years, image fusion algorithms based on deep learning have also achieved great success. However, owing to the lack of sufficient and reliable paired data and the absence of an ideal fusion result to serve as supervision, it is difficult to design a precise network training mode. Moreover, handcrafted fusion strategies struggle to ensure full use of the available information, which easily causes redundancy and omission. To solve these problems, this paper proposes a multi-stage visible and infrared image fusion network based on an attention mechanism (MSFAM). Our method stabilizes the training process through multi-stage training and enhances features with a learnable attention fusion block. To further improve performance, we design a Semantic Constraint module and a Push–Pull loss function for the fusion task. Compared with several recent methods, qualitative comparison shows that our model produces more natural fusion results with stronger applicability. In quantitative experiments, MSFAM achieves the best results on three of the six metrics commonly used in fusion tasks, while the other methods score well on only one or a few metrics. In addition, a common high-level semantic task, object detection, is used to demonstrate the greater benefit of our fusion results for downstream tasks compared with single-modality images and the fusion results of existing methods. All these experiments demonstrate the superiority and effectiveness of our algorithm.
MDPI 2022-05-11 /pmc/articles/PMC9144774/ /pubmed/35632059 http://dx.doi.org/10.3390/s22103651 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zheng, Xin Yang, Qiyong Si, Pengbo Wu, Qiang A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism |
title | A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism |
title_full | A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism |
title_fullStr | A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism |
title_full_unstemmed | A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism |
title_short | A Multi-Stage Visible and Infrared Image Fusion Network Based on Attention Mechanism |
title_sort | multi-stage visible and infrared image fusion network based on attention mechanism |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9144774/ https://www.ncbi.nlm.nih.gov/pubmed/35632059 http://dx.doi.org/10.3390/s22103651 |
work_keys_str_mv | AT zhengxin amultistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT yangqiyong amultistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT sipengbo amultistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT wuqiang amultistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT zhengxin multistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT yangqiyong multistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT sipengbo multistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism AT wuqiang multistagevisibleandinfraredimagefusionnetworkbasedonattentionmechanism |