Brain tumor segmentation in multimodal MRI via pixel-level and feature-level image fusion

Bibliographic Details
Main Authors: Liu, Yu; Mu, Fuhao; Shi, Yu; Cheng, Juan; Li, Chang; Chen, Xun
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9515796/
https://www.ncbi.nlm.nih.gov/pubmed/36188482
http://dx.doi.org/10.3389/fnins.2022.1000587
author Liu, Yu
Mu, Fuhao
Shi, Yu
Cheng, Juan
Li, Chang
Chen, Xun
collection PubMed
description Brain tumor segmentation in multimodal MRI volumes is of great significance to disease diagnosis, treatment planning, survival prediction and other related tasks. However, most existing brain tumor segmentation methods fail to make sufficient use of multimodal information. The most common approach is to simply stack the original multimodal images or their low-level features as the model input, and many methods treat the data from every modality as equally important to a given segmentation target. In this paper, we introduce multimodal image fusion techniques, covering both pixel-level and feature-level fusion, into brain tumor segmentation, aiming at a fuller and finer use of multimodal information. At the pixel level, we present a convolutional network named PIF-Net for 3D MR image fusion to enrich the input modalities of the segmentation model. The fused modalities strengthen the association among the different types of pathological information captured by the source modalities, yielding a modality enhancement effect. At the feature level, we design an attention-based modality selection feature fusion (MSFF) module that refines multimodal features and accounts for the differing relevance of each modality to a given segmentation target. A two-stage brain tumor segmentation framework is then built from these components and the popular V-Net model. Experiments are conducted on the BraTS 2019 and BraTS 2020 benchmarks. The results demonstrate that the proposed pixel-level and feature-level fusion components effectively improve the segmentation accuracy of brain tumors.
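To make the feature-level idea above concrete, the following is a minimal, hedged sketch (not the authors' code) of an attention-based modality-selection fusion block in PyTorch: per-modality 3D feature maps are squeezed into descriptors, a small network predicts one attention weight per modality, and the reweighted features are fused by a 1x1x1 convolution. The class name ModalitySelectionFusion, the channel sizes, and the reduction factor are illustrative assumptions; the paper's exact MSFF architecture may differ.

# Hedged sketch of an attention-based modality-selection fusion block,
# in the spirit of the MSFF module described above (names and sizes are assumptions).
import torch
import torch.nn as nn

class ModalitySelectionFusion(nn.Module):
    """Reweights per-modality 3D feature maps with learned attention, then fuses them."""
    def __init__(self, num_modalities: int, in_channels: int):
        super().__init__()
        # Squeeze each modality's features to a global descriptor.
        self.pool = nn.AdaptiveAvgPool3d(1)
        # Predict one attention score per modality from the concatenated descriptors.
        self.score = nn.Sequential(
            nn.Linear(num_modalities * in_channels, num_modalities * in_channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(num_modalities * in_channels // 4, num_modalities),
        )
        # Fuse the reweighted modality features back to a single feature map.
        self.fuse = nn.Conv3d(num_modalities * in_channels, in_channels, kernel_size=1)

    def forward(self, feats):  # feats: list of (B, C, D, H, W) tensors, one per modality
        descriptors = torch.cat([self.pool(f).flatten(1) for f in feats], dim=1)  # (B, M*C)
        weights = torch.softmax(self.score(descriptors), dim=1)                   # (B, M)
        weighted = [f * weights[:, i].view(-1, 1, 1, 1, 1) for i, f in enumerate(feats)]
        return self.fuse(torch.cat(weighted, dim=1))                              # (B, C, D, H, W)

# Toy usage: four MRI modalities (e.g., T1, T1ce, T2, FLAIR) with 8-channel features.
if __name__ == "__main__":
    msff = ModalitySelectionFusion(num_modalities=4, in_channels=8)
    feats = [torch.randn(1, 8, 16, 16, 16) for _ in range(4)]
    print(msff(feats).shape)  # torch.Size([1, 8, 16, 16, 16])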
format Online
Article
Text
id pubmed-9515796
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9515796 2022-09-29 Front Neurosci Neuroscience Frontiers Media S.A. 2022-09-14 /pmc/articles/PMC9515796/ /pubmed/36188482 http://dx.doi.org/10.3389/fnins.2022.1000587 Text en Copyright © 2022 Liu, Mu, Shi, Cheng, Li and Chen. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Brain tumor segmentation in multimodal MRI via pixel-level and feature-level image fusion
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9515796/
https://www.ncbi.nlm.nih.gov/pubmed/36188482
http://dx.doi.org/10.3389/fnins.2022.1000587