A layer-wise fusion network incorporating self-supervised learning for multimodal MR image synthesis

Bibliographic Details
Main Authors: Zhou, Qian; Zou, Hua
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Genetics
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9396279/
https://www.ncbi.nlm.nih.gov/pubmed/36017492
http://dx.doi.org/10.3389/fgene.2022.937042
collection PubMed
description Magnetic resonance (MR) imaging plays an important role in medical diagnosis and treatment; different modalities of MR images can provide rich and complementary information to improve the accuracy of diagnosis. However, due to the limitations of scanning time and medical conditions, certain modalities of MR may be unavailable or of low quality in clinical practice. In this study, we propose a new multimodal MR image synthesis network to generate missing MR images. The proposed model comprises three stages: feature extraction, feature fusion, and image generation. During feature extraction, 2D and 3D self-supervised pretext tasks are introduced to pre-train the backbone for better representations of each modality. Then, a channel attention mechanism is used when fusing features so that the network can adaptively weight different fusion operations to learn common representations of all modalities. Finally, a generative adversarial network is adopted as the basic framework for image generation, in which a feature-level edge information loss is combined with a pixel-wise loss to keep the synthesized and real images consistent in their anatomical characteristics. The 2D and 3D self-supervised pre-training improves feature extraction, retaining more detail in the synthesized images. Moreover, the proposed multimodal attention feature fusion block (MAFFB) in the layer-wise fusion strategy models both the common and unique information in all modalities, consistent with clinical analysis. We also perform an interpretability analysis to confirm the rationality and effectiveness of our method. The experimental results demonstrate that our method can be applied to both single-modal and multimodal synthesis with high robustness, and it outperforms other state-of-the-art approaches both objectively and subjectively.
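The abstract describes two reusable ideas: channel-attention-weighted fusion of per-modality features, and a synthesis loss combining a pixel-wise term with an edge-consistency term. The sketch below is a rough NumPy illustration only — the squeeze-and-excitation-style gating, the weight shapes (`w1`, `w2`), the Sobel edge operator, and the weighting factor `lam` are assumptions of this sketch, not the paper's actual MAFFB or loss definition:

```python
import numpy as np

def channel_attention_fuse(features, w1, w2):
    """Fuse per-modality feature maps with squeeze-and-excitation-style
    channel attention (a loose analogue of an attention fusion block).
    features: list of M arrays, each of shape (C, H, W)."""
    stacked = np.stack(features)                       # (M, C, H, W)
    concat = stacked.reshape(-1, *stacked.shape[2:])   # (M*C, H, W)
    z = concat.mean(axis=(1, 2))                       # squeeze: (M*C,)
    h = np.maximum(0.0, w1 @ z)                        # excitation: ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))                # sigmoid gate, (M*C,)
    weighted = concat * s[:, None, None]               # re-weight each channel
    # Fuse back to C channels by summing the re-weighted modalities
    return weighted.reshape(stacked.shape).sum(axis=0)  # (C, H, W)

def sobel_edges(img):
    """Edge magnitude map via Sobel filtering (valid convolution)."""
    kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=float)
    ky = kx.T
    H, W = img.shape
    gx = np.zeros((H - 2, W - 2))
    gy = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def synthesis_loss(fake, real, lam=10.0):
    """Pixel-wise L1 loss plus an edge-consistency term."""
    pixel = np.abs(fake - real).mean()
    edge = np.abs(sobel_edges(fake) - sobel_edges(real)).mean()
    return pixel + lam * edge
```

Identical synthesized and real images give a loss of exactly zero, and any pixel or edge discrepancy raises it; in the actual model the edge term is computed at the feature level rather than with a fixed Sobel kernel.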
id pubmed-9396279
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-9396279 2022-08-24 A layer-wise fusion network incorporating self-supervised learning for multimodal MR image synthesis Zhou, Qian; Zou, Hua. Front Genet (Genetics). Frontiers Media S.A. 2022-08-09 /pmc/articles/PMC9396279/ /pubmed/36017492 http://dx.doi.org/10.3389/fgene.2022.937042 Text en Copyright © 2022 Zhou and Zou. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title A layer-wise fusion network incorporating self-supervised learning for multimodal MR image synthesis
topic Genetics