
General Image Fusion for an Arbitrary Number of Inputs Using Convolutional Neural Networks

In this paper, we propose a unified and flexible framework for general image fusion tasks, including multi-exposure image fusion, multi-focus image fusion, infrared/visible image fusion, and multi-modality medical image fusion. Unlike other deep learning-based image fusion methods, which are designed for a fixed number of input sources (normally two), the proposed framework can handle an arbitrary number of inputs simultaneously. Specifically, we use a symmetric function (e.g., max-pooling) to extract the most significant features from all the input images, which are then fused with the respective features from each input source. This symmetric function makes the network permutation-invariant: it can extract and fuse the salient features of each image regardless of the order in which the inputs are presented. Permutation invariance is also convenient during inference, when the number of inputs is not fixed. To handle multiple image fusion tasks within one unified framework, we adopt continual learning based on Elastic Weight Consolidation (EWC) across the different fusion tasks. Subjective and objective experiments on several public datasets demonstrate that the proposed method outperforms state-of-the-art methods on multiple image fusion tasks.
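The permutation-invariant fusion described above is simple enough to sketch in code. Below is a minimal PyTorch sketch, assuming a shared per-input encoder, an element-wise max over all inputs, and a concatenation-based merge of the pooled features with each input's own features; the channel counts, the merge operator, and the final averaging are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of permutation-invariant fusion for an arbitrary number of
# inputs (illustrative; layer sizes and the merge step are assumptions).
import torch
import torch.nn as nn


class SymmetricFusion(nn.Module):
    """Fuses features from an arbitrary number of input images via max-pooling."""

    def __init__(self, channels: int = 32):
        super().__init__()
        # Shared encoder applied to every input image; weight sharing keeps
        # the network independent of the number of inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Merges each input's own features with the globally pooled features.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, images: list) -> torch.Tensor:
        # Encode each input independently with the shared encoder.
        feats = [self.encoder(img) for img in images]       # N tensors of (B, C, H, W)
        stacked = torch.stack(feats, dim=0)                  # (N, B, C, H, W)
        # Element-wise max over the input dimension: a symmetric function,
        # so the result does not depend on the order of the inputs.
        global_feat, _ = stacked.max(dim=0)                  # (B, C, H, W)
        # Fuse the pooled (shared) features back with each input's features.
        fused = [self.merge(torch.cat([f, global_feat], dim=1)) for f in feats]
        # Average the per-input fused maps into a single output representation.
        return torch.stack(fused, dim=0).mean(dim=0)


if __name__ == "__main__":
    net = SymmetricFusion()
    inputs = [torch.rand(1, 1, 64, 64) for _ in range(3)]   # three grayscale inputs
    print(net(inputs).shape)                                  # torch.Size([1, 32, 64, 64])
```

Because the max is taken across the inputs rather than within each image, adding, removing, or reordering inputs changes nothing about the network's weights, which is what allows the same model to run with two, three, or more sources at inference time.

The continual-learning component can be sketched as well. The following is a minimal sketch of the standard Elastic Weight Consolidation quadratic penalty; the function name `ewc_penalty`, the penalty weight `lam`, and the assumption of a precomputed diagonal Fisher estimate are illustrative and not taken from the paper.

```python
# Minimal sketch of the EWC penalty used to retain earlier fusion tasks while
# training on a new one (illustrative; hyperparameters are assumptions).
import torch


def ewc_penalty(model, old_params, fisher, lam=1000.0):
    """Quadratic penalty that discourages the network from drifting away from
    parameters learned on previous tasks, weighted per parameter by a
    diagonal Fisher-information importance estimate."""
    device = next(model.parameters()).device
    penalty = torch.zeros((), device=device)
    for name, param in model.named_parameters():
        if name in old_params:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lam * penalty


# Usage sketch when training on a new fusion task:
#   loss = fusion_loss(output, inputs) + ewc_penalty(model, old_params, fisher)
#   loss.backward()
# where `old_params` and `fisher` are dicts keyed by parameter name, snapshotted
# after training on the previous task(s).
```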


Bibliographic Details
Main Authors: Xiao, Yifan; Guo, Zhixin; Veelaert, Peter; Philips, Wilfried
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002723/
https://www.ncbi.nlm.nih.gov/pubmed/35408072
http://dx.doi.org/10.3390/s22072457
author Xiao, Yifan
Guo, Zhixin
Veelaert, Peter
Philips, Wilfried
collection PubMed
description In this paper, we propose a unified and flexible framework for general image fusion tasks, including multi-exposure image fusion, multi-focus image fusion, infrared/visible image fusion, and multi-modality medical image fusion. Unlike other deep learning-based image fusion methods, which are designed for a fixed number of input sources (normally two), the proposed framework can handle an arbitrary number of inputs simultaneously. Specifically, we use a symmetric function (e.g., max-pooling) to extract the most significant features from all the input images, which are then fused with the respective features from each input source. This symmetric function makes the network permutation-invariant: it can extract and fuse the salient features of each image regardless of the order in which the inputs are presented. Permutation invariance is also convenient during inference, when the number of inputs is not fixed. To handle multiple image fusion tasks within one unified framework, we adopt continual learning based on Elastic Weight Consolidation (EWC) across the different fusion tasks. Subjective and objective experiments on several public datasets demonstrate that the proposed method outperforms state-of-the-art methods on multiple image fusion tasks.
format Online
Article
Text
id pubmed-9002723
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9002723 2022-04-13 General Image Fusion for an Arbitrary Number of Inputs Using Convolutional Neural Networks. Xiao, Yifan; Guo, Zhixin; Veelaert, Peter; Philips, Wilfried. Sensors (Basel), Article. MDPI 2022-03-23. /pmc/articles/PMC9002723/ /pubmed/35408072 http://dx.doi.org/10.3390/s22072457 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title General Image Fusion for an Arbitrary Number of Inputs Using Convolutional Neural Networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9002723/
https://www.ncbi.nlm.nih.gov/pubmed/35408072
http://dx.doi.org/10.3390/s22072457