Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion
Infrared and visible image fusion methods based on feature decomposition are able to generate good fused images. However, most of them employ manually designed, simple feature fusion strategies in the reconstruction stage, such as addition or concatenation. These strategies do not account for the relative importance of different features and may therefore suffer from issues such as low contrast, blurred results, or information loss. To address this problem, we designed an adaptive fusion network that synthesizes decoupled common structural features and distinct modal features under an attention-based adaptive fusion (AAF) strategy. The AAF module adaptively computes the weights assigned to different features according to their relative importance. Moreover, the structural features from different sources are also synthesized under the AAF strategy before reconstruction, to provide more complete structural information. More important features thus automatically receive more attention, and the advantageous information they contain is expressed more reasonably in the final fused images. Experiments on several datasets demonstrated an obvious improvement in image fusion quality using our method.
Main Authors: | Wang, Lei; Hu, Ziming; Kong, Quan; Qi, Qian; Liao, Qing |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047768/ https://www.ncbi.nlm.nih.gov/pubmed/36981297 http://dx.doi.org/10.3390/e25030407 |
_version_ | 1785014009484804096 |
---|---|
author | Wang, Lei; Hu, Ziming; Kong, Quan; Qi, Qian; Liao, Qing |
author_facet | Wang, Lei; Hu, Ziming; Kong, Quan; Qi, Qian; Liao, Qing |
author_sort | Wang, Lei |
collection | PubMed |
description | Infrared and visible image fusion methods based on feature decomposition are able to generate good fused images. However, most of them employ manually designed, simple feature fusion strategies in the reconstruction stage, such as addition or concatenation. These strategies do not account for the relative importance of different features and may therefore suffer from issues such as low contrast, blurred results, or information loss. To address this problem, we designed an adaptive fusion network that synthesizes decoupled common structural features and distinct modal features under an attention-based adaptive fusion (AAF) strategy. The AAF module adaptively computes the weights assigned to different features according to their relative importance. Moreover, the structural features from different sources are also synthesized under the AAF strategy before reconstruction, to provide more complete structural information. More important features thus automatically receive more attention, and the advantageous information they contain is expressed more reasonably in the final fused images. Experiments on several datasets demonstrated an obvious improvement in image fusion quality using our method. |
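The description above outlines the AAF strategy only at a high level. As a rough illustration of what attention-based adaptive weighting between two feature maps can look like, the following is a minimal PyTorch-style sketch; the module name, the gating branch, and all layer sizes are illustrative assumptions and are not the authors' published implementation.

```python
# Illustrative sketch only: one plausible reading of an attention-based
# adaptive fusion (AAF) step. All names and layer sizes are assumptions.
import torch
import torch.nn as nn

class AttentionAdaptiveFusion(nn.Module):
    """Fuse two feature maps with weights computed from their content."""
    def __init__(self, channels: int):
        super().__init__()
        # A small gating branch scores each input feature map; softmax later
        # turns the scores into fusion weights that sum to 1 per location.
        self.score = nn.Sequential(
            nn.Conv2d(channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 1, kernel_size=1),
        )

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        # Score each modality, then normalize across the two scores.
        scores = torch.cat([self.score(feat_ir), self.score(feat_vis)], dim=1)  # (B, 2, H, W)
        weights = torch.softmax(scores, dim=1)
        # Weighted sum replaces plain addition/concatenation fusion.
        return weights[:, 0:1] * feat_ir + weights[:, 1:2] * feat_vis

# Example usage with dummy 64-channel features:
# aaf = AttentionAdaptiveFusion(channels=64)
# fused = aaf(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```

The softmax over the two modality scores is one common way to make the fusion weights sum to one at every spatial location, so more informative features contribute more to the fused output than a fixed addition or concatenation would allow.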
format | Online Article Text |
id | pubmed-10047768 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10047768 2023-03-29 Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion Wang, Lei; Hu, Ziming; Kong, Quan; Qi, Qian; Liao, Qing Entropy (Basel) Article Infrared and visible image fusion methods based on feature decomposition are able to generate good fused images. However, most of them employ manually designed, simple feature fusion strategies in the reconstruction stage, such as addition or concatenation. These strategies do not account for the relative importance of different features and may therefore suffer from issues such as low contrast, blurred results, or information loss. To address this problem, we designed an adaptive fusion network that synthesizes decoupled common structural features and distinct modal features under an attention-based adaptive fusion (AAF) strategy. The AAF module adaptively computes the weights assigned to different features according to their relative importance. Moreover, the structural features from different sources are also synthesized under the AAF strategy before reconstruction, to provide more complete structural information. More important features thus automatically receive more attention, and the advantageous information they contain is expressed more reasonably in the final fused images. Experiments on several datasets demonstrated an obvious improvement in image fusion quality using our method. MDPI 2023-02-23 /pmc/articles/PMC10047768/ /pubmed/36981297 http://dx.doi.org/10.3390/e25030407 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Wang, Lei Hu, Ziming Kong, Quan Qi, Qian Liao, Qing Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion |
title | Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion |
title_full | Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion |
title_fullStr | Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion |
title_full_unstemmed | Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion |
title_short | Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion |
title_sort | infrared and visible image fusion via attention-based adaptive feature fusion |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047768/ https://www.ncbi.nlm.nih.gov/pubmed/36981297 http://dx.doi.org/10.3390/e25030407 |
work_keys_str_mv | AT wanglei infraredandvisibleimagefusionviaattentionbasedadaptivefeaturefusion AT huziming infraredandvisibleimagefusionviaattentionbasedadaptivefeaturefusion AT kongquan infraredandvisibleimagefusionviaattentionbasedadaptivefeaturefusion AT qiqian infraredandvisibleimagefusionviaattentionbasedadaptivefeaturefusion AT liaoqing infraredandvisibleimagefusionviaattentionbasedadaptivefeaturefusion |