
Multi-Scale Mixed Attention Network for CT and MRI Image Fusion

Recently, the rapid development of the Internet of Things has contributed to the rise of telemedicine. However, online diagnosis requires doctors to analyze multiple multi-modal medical images, which is inconvenient and inefficient. Multi-modal medical image fusion has been proposed to solve this problem. Owing to their outstanding feature extraction and representation capabilities, convolutional neural networks (CNNs) have been widely used in medical image fusion. However, most existing CNN-based medical image fusion methods compute their weight maps with a simple weighted-average strategy, which degrades the quality of the fused images because inessential information is given undue weight. In this paper, we propose a CNN-based CT and MRI image fusion method, the multi-scale mixed attention network (MMAN), which adopts a visual saliency-based strategy to preserve more useful information. First, a multi-scale mixed attention block is designed to extract features; this block gathers more helpful information and refines the extracted features at both the channel and spatial levels. Then, a visual saliency-based fusion strategy is used to fuse the feature maps. Finally, the fused image is obtained via reconstruction blocks. Experimental results show that our method preserves more textural detail, clearer edge information, and higher contrast than other state-of-the-art methods.
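The pipeline described in the abstract (multi-scale feature extraction, channel- and spatial-level attention refinement, saliency-weighted fusion) can be illustrated with a short PyTorch sketch. Everything below is an illustrative assumption: the module names (MixedAttentionBlock, saliency_fusion), channel counts, kernel sizes, the CBAM-style attention, and the local-L1 saliency measure are stand-ins, not the paper's exact MMAN design.

```python
# Minimal sketch of the abstract's pipeline; architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedAttentionBlock(nn.Module):
    """Multi-scale features refined by channel and spatial attention
    (a CBAM-style stand-in for the paper's mixed attention block)."""
    def __init__(self, channels=64):
        super().__init__()
        # Multi-scale branches: parallel convolutions with different receptive fields.
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)
        # Channel attention: squeeze spatial dims, re-weight each channel.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        # Spatial attention: one H x W map from pooled channel statistics.
        self.spatial_conv = nn.Conv2d(2, 1, 7, padding=3)

    def forward(self, x):
        feat = self.fuse(torch.cat([F.relu(self.branch3(x)),
                                    F.relu(self.branch5(x))], dim=1))
        # Channel-level refinement.
        w = self.channel_mlp(feat.mean(dim=(2, 3)))            # (B, C)
        feat = feat * w.unsqueeze(-1).unsqueeze(-1)
        # Spatial-level refinement.
        s = torch.cat([feat.mean(dim=1, keepdim=True),
                       feat.amax(dim=1, keepdim=True)], dim=1)
        return feat * torch.sigmoid(self.spatial_conv(s))

def saliency_fusion(feat_ct, feat_mri, eps=1e-8):
    """Visual-saliency-weighted fusion of two feature maps. Saliency is
    approximated here by local L1 activity, an assumed stand-in for the
    paper's saliency measure."""
    act_ct = F.avg_pool2d(feat_ct.abs().mean(dim=1, keepdim=True), 3, 1, 1)
    act_mri = F.avg_pool2d(feat_mri.abs().mean(dim=1, keepdim=True), 3, 1, 1)
    w_ct = act_ct / (act_ct + act_mri + eps)   # per-pixel weight in [0, 1]
    return w_ct * feat_ct + (1.0 - w_ct) * feat_mri
```

With x = torch.randn(1, 64, 256, 256) for each modality's features, MixedAttentionBlock(64) and saliency_fusion both preserve the (1, 64, 256, 256) shape; a reconstruction head (e.g., a few convolutions back down to one channel) would then decode the fused features into the fused image.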

Bibliographic Details
Main Authors: Liu, Yang; Yan, Binyu; Zhang, Rongzhu; Liu, Kai; Jeon, Gwanggil; Yang, Xiaoming
Format: Online Article (Text)
Language: English
Published: MDPI, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9222659/
https://www.ncbi.nlm.nih.gov/pubmed/35741563
http://dx.doi.org/10.3390/e24060843
Collection: PubMed (record id: pubmed-9222659)
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Entropy (Basel)
Published Online: 19 June 2022
License: © 2022 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the terms of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).