
DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network

Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework for the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively. The transformer architecture is introduced in the infrared feature extraction branch, which can force the network to focus on the local features of infrared images while still obtaining their contextual information. The visible feature extraction branch uses residual dense blocks to fully extract the rich background and texture detail information of visible images. In this way, it can provide better infrared targets and visible details for the fused image. Experimental results on multiple datasets indicate that DSA-Net outperforms state-of-the-art methods in both qualitative and quantitative evaluations. In addition, we also apply the fusion results to the target detection task, which indirectly demonstrates the fusion performances of our method.
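The abstract describes the architecture at a high level: an infrared branch built on transformer-style self-attention and a visible branch built on residual dense blocks, whose feature maps are fused and decoded into a single image. Below is a minimal PyTorch sketch of that general two-branch layout; the layer widths, block counts, single-channel inputs, and the reconstruction head are illustrative assumptions and do not reproduce the authors' published DSA-Net implementation.

```python
# Minimal sketch of a dual-stream asymmetric fusion network in PyTorch.
# Layer widths, block counts, and the reconstruction head are illustrative
# assumptions, NOT the published DSA-Net configuration.
import torch
import torch.nn as nn


class ResidualDenseBlock(nn.Module):
    """Visible-light branch block: densely connected convs with a residual skip."""
    def __init__(self, channels=32, growth=16, layers=3):
        super().__init__()
        self.convs = nn.ModuleList()
        in_ch = channels
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1), nn.ReLU(inplace=True)))
            in_ch += growth
        self.fuse = nn.Conv2d(in_ch, channels, 1)  # local feature fusion

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))  # dense connections
        return x + self.fuse(torch.cat(feats, dim=1))    # residual connection


class TransformerBlock(nn.Module):
    """Infrared branch block: self-attention over spatial positions plus an MLP."""
    def __init__(self, channels=32, heads=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(nn.Linear(channels, channels * 2), nn.GELU(),
                                 nn.Linear(channels * 2, channels))

    def forward(self, x):
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)             # (B, H*W, C) token sequence
        q = self.norm1(t)
        t = t + self.attn(q, q, q)[0]                # global self-attention
        t = t + self.mlp(self.norm2(t))
        return t.transpose(1, 2).reshape(b, c, h, w)


class DualStreamFusionNet(nn.Module):
    """Asymmetric two-branch encoder followed by a shared reconstruction head."""
    def __init__(self, channels=32):
        super().__init__()
        self.ir_stem = nn.Conv2d(1, channels, 3, padding=1)
        self.vis_stem = nn.Conv2d(1, channels, 3, padding=1)
        self.ir_branch = nn.Sequential(TransformerBlock(channels), TransformerBlock(channels))
        self.vis_branch = nn.Sequential(ResidualDenseBlock(channels), ResidualDenseBlock(channels))
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1), nn.Tanh())

    def forward(self, ir, vis):
        f_ir = self.ir_branch(self.ir_stem(ir))      # infrared feature maps
        f_vis = self.vis_branch(self.vis_stem(vis))  # visible feature maps
        return self.reconstruct(torch.cat([f_ir, f_vis], dim=1))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 64, 64)    # single-channel infrared image
    vis = torch.rand(1, 1, 64, 64)   # single-channel (grayscale) visible image
    fused = DualStreamFusionNet()(ir, vis)
    print(fused.shape)               # torch.Size([1, 1, 64, 64])
```

The asymmetry is the point of the design as stated in the abstract: the attention-based infrared branch retains contextual information around salient thermal targets, while the densely connected visible branch preserves background and texture detail before the two feature streams are fused.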


Bibliographic Details
Main Authors: Yin, Ruyi, Yang, Bin, Huang, Zuyan, Zhang, Xiaozhi
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10459630/
https://www.ncbi.nlm.nih.gov/pubmed/37631634
http://dx.doi.org/10.3390/s23167097
_version_ 1785097457770692608
author Yin, Ruyi
Yang, Bin
Huang, Zuyan
Zhang, Xiaozhi
author_facet Yin, Ruyi
Yang, Bin
Huang, Zuyan
Zhang, Xiaozhi
author_sort Yin, Ruyi
collection PubMed
description Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework for the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively. The transformer architecture is introduced in the infrared feature extraction branch, which can force the network to focus on the local features of infrared images while still obtaining their contextual information. The visible feature extraction branch uses residual dense blocks to fully extract the rich background and texture detail information of visible images. In this way, it can provide better infrared targets and visible details for the fused image. Experimental results on multiple datasets indicate that DSA-Net outperforms state-of-the-art methods in both qualitative and quantitative evaluations. In addition, we also apply the fusion results to the target detection task, which indirectly demonstrates the fusion performances of our method.
format Online
Article
Text
id pubmed-10459630
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10459630 2023-08-27 DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network Yin, Ruyi Yang, Bin Huang, Zuyan Zhang, Xiaozhi Sensors (Basel) Article Infrared and visible image fusion technologies are used to characterize the same scene using diverse modalities. However, most existing deep learning-based fusion methods are designed as symmetric networks, which ignore the differences between modal images and lead to source image information loss during feature extraction. In this paper, we propose a new fusion framework for the different characteristics of infrared and visible images. Specifically, we design a dual-stream asymmetric network with two different feature extraction networks to extract infrared and visible feature maps, respectively. The transformer architecture is introduced in the infrared feature extraction branch, which can force the network to focus on the local features of infrared images while still obtaining their contextual information. The visible feature extraction branch uses residual dense blocks to fully extract the rich background and texture detail information of visible images. In this way, it can provide better infrared targets and visible details for the fused image. Experimental results on multiple datasets indicate that DSA-Net outperforms state-of-the-art methods in both qualitative and quantitative evaluations. In addition, we also apply the fusion results to the target detection task, which indirectly demonstrates the fusion performances of our method. MDPI 2023-08-11 /pmc/articles/PMC10459630/ /pubmed/37631634 http://dx.doi.org/10.3390/s23167097 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Yin, Ruyi
Yang, Bin
Huang, Zuyan
Zhang, Xiaozhi
DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
title DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
title_full DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
title_fullStr DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
title_full_unstemmed DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
title_short DSA-Net: Infrared and Visible Image Fusion via Dual-Stream Asymmetric Network
title_sort dsa-net: infrared and visible image fusion via dual-stream asymmetric network
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10459630/
https://www.ncbi.nlm.nih.gov/pubmed/37631634
http://dx.doi.org/10.3390/s23167097
work_keys_str_mv AT yinruyi dsanetinfraredandvisibleimagefusionviadualstreamasymmetricnetwork
AT yangbin dsanetinfraredandvisibleimagefusionviadualstreamasymmetricnetwork
AT huangzuyan dsanetinfraredandvisibleimagefusionviadualstreamasymmetricnetwork
AT zhangxiaozhi dsanetinfraredandvisibleimagefusionviadualstreamasymmetricnetwork