A Two-To-One Deep Learning General Framework for Image Fusion
Main Authors: | Zhu, Pan; Ouyang, Wanqi; Guo, Yongxing; Zhou, Xinglin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Bioengineering and Biotechnology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9376963/ https://www.ncbi.nlm.nih.gov/pubmed/35979172 http://dx.doi.org/10.3389/fbioe.2022.923364 |
_version_ | 1784768244576419840 |
---|---|
author | Zhu, Pan; Ouyang, Wanqi; Guo, Yongxing; Zhou, Xinglin |
author_facet | Zhu, Pan; Ouyang, Wanqi; Guo, Yongxing; Zhou, Xinglin |
author_sort | Zhu, Pan |
collection | PubMed |
description | Image fusion algorithms have great application value in computer vision: a well-fused image describes a scene more comprehensively and clearly, benefiting both human visual recognition and automated machine detection. In recent years, image fusion algorithms have achieved considerable success in different domains, yet generalization to multi-modal image fusion remains a major challenge. To address this problem, this paper proposes a general image fusion framework based on an improved convolutional neural network. First, the feature information of each input image is captured by multiple feature extraction layers, and the resulting feature maps are stacked along the channel dimension to obtain a fused feature map. Finally, the feature maps derived from the multiple feature extraction layers are combined at high dimension through skip connections and convolutional filtering for reconstruction, producing the final fused image (see the illustrative sketch after this record). Multi-modal images gathered from multiple datasets form a large sample space for adequately training the network. Compared with existing convolutional neural networks and traditional fusion algorithms, the proposed model is both general and stable, shows strengths in subjective visualization and objective evaluation, and runs on average at least 94% faster than the reference neural-network-based algorithm. |
format | Online Article Text |
id | pubmed-9376963 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9376963 2022-08-16 A Two-To-One Deep Learning General Framework for Image Fusion Zhu, Pan; Ouyang, Wanqi; Guo, Yongxing; Zhou, Xinglin Front Bioeng Biotechnol Bioengineering and Biotechnology Frontiers Media S.A. 2022-07-14 /pmc/articles/PMC9376963/ /pubmed/35979172 http://dx.doi.org/10.3389/fbioe.2022.923364 Text en Copyright © 2022 Zhu, Ouyang, Guo and Zhou. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Bioengineering and Biotechnology Zhu, Pan Ouyang, Wanqi Guo, Yongxing Zhou, Xinglin A Two-To-One Deep Learning General Framework for Image Fusion |
title | A Two-To-One Deep Learning General Framework for Image Fusion |
title_full | A Two-To-One Deep Learning General Framework for Image Fusion |
title_fullStr | A Two-To-One Deep Learning General Framework for Image Fusion |
title_full_unstemmed | A Two-To-One Deep Learning General Framework for Image Fusion |
title_short | A Two-To-One Deep Learning General Framework for Image Fusion |
title_sort | two-to-one deep learning general framework for image fusion |
topic | Bioengineering and Biotechnology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9376963/ https://www.ncbi.nlm.nih.gov/pubmed/35979172 http://dx.doi.org/10.3389/fbioe.2022.923364 |
work_keys_str_mv | AT zhupan atwotoonedeeplearninggeneralframeworkforimagefusion AT ouyangwanqi atwotoonedeeplearninggeneralframeworkforimagefusion AT guoyongxing atwotoonedeeplearninggeneralframeworkforimagefusion AT zhouxinglin atwotoonedeeplearninggeneralframeworkforimagefusion AT zhupan twotoonedeeplearninggeneralframeworkforimagefusion AT ouyangwanqi twotoonedeeplearninggeneralframeworkforimagefusion AT guoyongxing twotoonedeeplearninggeneralframeworkforimagefusion AT zhouxinglin twotoonedeeplearninggeneralframeworkforimagefusion |
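The description above outlines the architecture only at a high level: two feature-extraction branches, channel-wise stacking of the resulting feature maps, and reconstruction through skip connections and convolutional filtering. Below is a minimal PyTorch sketch of what such a two-to-one fusion network could look like; the class names, layer widths, and kernel sizes are illustrative assumptions, not the authors' published implementation.

```python
# Illustrative two-to-one image fusion CNN: two feature-extraction branches,
# channel-wise concatenation, and a reconstruction stage with a skip
# connection to the raw inputs. Layer sizes are assumptions, not the paper's.
import torch
import torch.nn as nn


class FeatureExtractor(nn.Module):
    """Shallow convolutional branch that extracts feature maps from one input."""

    def __init__(self, in_channels: int = 1, features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class TwoToOneFusionNet(nn.Module):
    """Fuse two single-channel source images into one fused image."""

    def __init__(self, features: int = 16):
        super().__init__()
        self.branch_a = FeatureExtractor(1, features)
        self.branch_b = FeatureExtractor(1, features)
        # Reconstruction: the concatenated feature maps plus the raw inputs
        # (skip connection) are filtered back down to a single output channel.
        self.reconstruct = nn.Sequential(
            nn.Conv2d(2 * features + 2, features, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(features, 1, kernel_size=3, padding=1),
        )

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        feat_a = self.branch_a(img_a)
        feat_b = self.branch_b(img_b)
        # Stack feature maps along the channel dimension, then append the
        # original inputs as a skip connection before reconstruction.
        fused = torch.cat([feat_a, feat_b, img_a, img_b], dim=1)
        return self.reconstruct(fused)


if __name__ == "__main__":
    net = TwoToOneFusionNet()
    a = torch.rand(1, 1, 256, 256)  # e.g. a visible-light image
    b = torch.rand(1, 1, 256, 256)  # e.g. an infrared or medical modality
    print(net(a, b).shape)  # torch.Size([1, 1, 256, 256])
```

In this sketch the skip connection simply appends the raw source images to the stacked feature maps before reconstruction, one common way to preserve low-level intensity information in fusion networks; how the paper implements its skip connections and reconstruction layers is described only in the full article.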