DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion

Bibliographic Details
Main Authors: Wang, Hongfeng; Wang, Jianzhong; Xu, Haonan; Sun, Yong; Yu, Zibo
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9318496/
https://www.ncbi.nlm.nih.gov/pubmed/35890828
http://dx.doi.org/10.3390/s22145149
_version_ 1784755305964371968
author Wang, Hongfeng
Wang, Jianzhong
Xu, Haonan
Sun, Yong
Yu, Zibo
author_facet Wang, Hongfeng
Wang, Jianzhong
Xu, Haonan
Sun, Yong
Yu, Zibo
author_sort Wang, Hongfeng
collection PubMed
description Infrared images are robust against illumination variation and disguises, containing the sharp edge contours of objects. Visible images are enriched with texture details. Infrared and visible image fusion seeks to obtain high-quality images that keep the advantages of the source images. This paper proposes an object-aware image fusion method based on a deep residual shrinkage network, termed DRSNFuse. DRSNFuse exploits residual shrinkage blocks for image fusion and introduces a deeper network to infrared and visible image fusion tasks than existing methods based on fully convolutional networks. The deeper network can effectively extract semantic information, while the residual shrinkage blocks maintain texture information throughout the whole network. The residual shrinkage blocks adapt a channel-wise attention mechanism to the fusion task, enabling feature map channels to focus on objects and backgrounds separately. A novel image fusion loss function is proposed to obtain better fusion performance and suppress artifacts. Trained with the proposed loss function, DRSNFuse generates fused images with fewer artifacts and more of the original texture, better satisfying the human visual system. Experiments show that our method achieves better fusion results than mainstream methods in quantitative comparisons and obtains fused images with brighter targets, sharper edge contours, richer details, and fewer artifacts.
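
As background for the description above, the sketch below shows the core mechanism of a residual shrinkage block: a residual branch whose features pass through channel-wise soft thresholding, with one threshold per channel predicted by a small SE-style attention subnetwork. This is a minimal PyTorch sketch of the general technique under assumed layer sizes and wiring; it is not the authors' DRSNFuse implementation, and all names are illustrative.

import torch
import torch.nn as nn

# Hypothetical residual shrinkage block (illustrative; not the paper's code).
class ResidualShrinkageBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Plain residual branch: two 3x3 convolutions.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        # SE-style subnetwork: predicts a scaling in (0, 1) per channel,
        # turned into a per-channel soft threshold in forward().
        self.threshold_net = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.body(x)
        # Per-channel mean of |features| sets the scale of the threshold.
        abs_mean = residual.abs().mean(dim=(2, 3))        # (N, C)
        tau = abs_mean * self.threshold_net(abs_mean)     # (N, C)
        tau = tau.unsqueeze(-1).unsqueeze(-1)             # (N, C, 1, 1)
        # Channel-wise soft thresholding: shrink weak (noisy) responses
        # toward zero while keeping the sign of stronger ones.
        shrunk = torch.sign(residual) * torch.clamp(residual.abs() - tau, min=0)
        return x + shrunk

# Example: a 64-channel block applied to a feature map.
# block = ResidualShrinkageBlock(64)
# y = block(torch.randn(1, 64, 120, 160))   # output has the input's shape

In a fusion network, several such blocks would be stacked between feature extraction and reconstruction; the learned per-channel thresholds let some channels suppress background noise while others keep object responses intact, matching the channel-wise attention behavior described in the abstract.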
format Online
Article
Text
id pubmed-9318496
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9318496 2022-07-27 DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion Wang, Hongfeng Wang, Jianzhong Xu, Haonan Sun, Yong Yu, Zibo Sensors (Basel) Article Infrared images are robust against illumination variation and disguises, containing the sharp edge contours of objects. Visible images are enriched with texture details. Infrared and visible image fusion seeks to obtain high-quality images that keep the advantages of the source images. This paper proposes an object-aware image fusion method based on a deep residual shrinkage network, termed DRSNFuse. DRSNFuse exploits residual shrinkage blocks for image fusion and introduces a deeper network to infrared and visible image fusion tasks than existing methods based on fully convolutional networks. The deeper network can effectively extract semantic information, while the residual shrinkage blocks maintain texture information throughout the whole network. The residual shrinkage blocks adapt a channel-wise attention mechanism to the fusion task, enabling feature map channels to focus on objects and backgrounds separately. A novel image fusion loss function is proposed to obtain better fusion performance and suppress artifacts. Trained with the proposed loss function, DRSNFuse generates fused images with fewer artifacts and more of the original texture, better satisfying the human visual system. Experiments show that our method achieves better fusion results than mainstream methods in quantitative comparisons and obtains fused images with brighter targets, sharper edge contours, richer details, and fewer artifacts. MDPI 2022-07-08 /pmc/articles/PMC9318496/ /pubmed/35890828 http://dx.doi.org/10.3390/s22145149 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
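
The record does not reproduce the paper's loss function. As a rough illustration of the kind of objective commonly used for infrared/visible fusion, the sketch below combines an intensity-fidelity term with a gradient (texture) term; the terms, weights, and helper names are assumptions for illustration and are not the DRSNFuse loss.

import torch
import torch.nn.functional as F

def sobel_gradients(img: torch.Tensor) -> torch.Tensor:
    # img: (N, 1, H, W); returns the gradient magnitude via Sobel filters.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def fusion_loss(fused, ir, vis, alpha=0.5, beta=1.0):
    # Intensity term: track the brighter of the two sources per pixel,
    # so hot infrared targets stay bright in the fused image.
    intensity = F.l1_loss(fused, torch.maximum(ir, vis))
    # Gradient term: preserve the stronger local texture of the sources.
    grad_target = torch.maximum(sobel_gradients(ir), sobel_gradients(vis))
    texture = F.l1_loss(sobel_gradients(fused), grad_target)
    return alpha * intensity + beta * texture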
spellingShingle Article
Wang, Hongfeng
Wang, Jianzhong
Xu, Haonan
Sun, Yong
Yu, Zibo
DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
title DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
title_full DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
title_fullStr DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
title_full_unstemmed DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
title_short DRSNFuse: Deep Residual Shrinkage Network for Infrared and Visible Image Fusion
title_sort drsnfuse: deep residual shrinkage network for infrared and visible image fusion
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9318496/
https://www.ncbi.nlm.nih.gov/pubmed/35890828
http://dx.doi.org/10.3390/s22145149
work_keys_str_mv AT wanghongfeng drsnfusedeepresidualshrinkagenetworkforinfraredandvisibleimagefusion
AT wangjianzhong drsnfusedeepresidualshrinkagenetworkforinfraredandvisibleimagefusion
AT xuhaonan drsnfusedeepresidualshrinkagenetworkforinfraredandvisibleimagefusion
AT sunyong drsnfusedeepresidualshrinkagenetworkforinfraredandvisibleimagefusion
AT yuzibo drsnfusedeepresidualshrinkagenetworkforinfraredandvisibleimagefusion