Multi-modality self-attention aware deep network for 3D biomedical segmentation

Bibliographic Details
Main Authors: Jia, Xibin, Liu, Yunfeng, Yang, Zhenghan, Yang, Dawei
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346322/
https://www.ncbi.nlm.nih.gov/pubmed/32646419
http://dx.doi.org/10.1186/s12911-020-1109-0
_version_ 1783556383819431936
author Jia, Xibin
Liu, Yunfeng
Yang, Zhenghan
Yang, Dawei
author_facet Jia, Xibin
Liu, Yunfeng
Yang, Zhenghan
Yang, Dawei
author_sort Jia, Xibin
collection PubMed
description BACKGROUND: Deep learning-based segmentation models have gradually been applied to biomedical images and have achieved state-of-the-art performance in 3D biomedical segmentation. However, most existing biomedical segmentation research addresses application cases that use a single type of medical image from the corresponding examination method. In practical clinical radiology, multiple imaging examinations are normally required for a final diagnosis, especially for severe diseases such as cancer. Therefore, considering the case of multi-modal images and exploring effective deep-network-based multi-modality fusion, we investigate how to make full use of the complementary information in multi-modal images, drawing on the clinical experience of radiologists in image analysis. METHODS: Guided by the diagnostic experience of human radiologists, we propose a new self-attention aware mechanism that improves segmentation performance by paying different attention to different modal images and different symptoms. First, we propose a multi-path encoder-decoder deep network for 3D biomedical segmentation. Second, to leverage the complementary information among modalities, we introduce an attention structure called the Multi-Modality Self-Attention Aware (MMSA) convolution. The multi-modal images used in this paper are different MR scanning modalities, which are input into the network separately. Self-attention weighted fusion of the multi-modal features is then performed by the proposed MMSA, which adaptively adjusts the fusion weights according to the contribution of each modality and each feature, as learned from the labeled data. RESULTS: Experiments were conducted on the public competition dataset BRATS-2015. The results show that the proposed method achieves Dice scores of 0.8726, 0.6563, and 0.8313 for the whole tumor, the tumor core, and the enhancing tumor core, respectively. Compared with a U-Net with SE blocks, these scores are higher by 0.0212, 0.0310, and 0.0304, respectively. CONCLUSIONS: We present a multi-modality self-attention aware convolution that achieves better segmentation results through an adaptive weighted fusion mechanism exploiting multiple medical image modalities. Experimental results demonstrate the effectiveness of the method and its promise for multi-modality fusion based medical image analysis.
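For illustration, below is a minimal PyTorch sketch of the fusion idea the abstract describes: each MR modality is encoded along its own path, and a learned gate rescales the per-modality feature maps before they are fused. This is an SE-style reading of the described mechanism under stated assumptions, not the authors' actual MMSA implementation; all module and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class ModalityAttentionFusion(nn.Module):
    """Hypothetical sketch: adaptive weighted fusion of per-modality features."""

    def __init__(self, num_modalities: int, channels: int, reduction: int = 4):
        super().__init__()
        # Squeeze: global average pool each modality's feature map, then a small
        # bottleneck MLP predicts one weight per (modality, channel) pair.
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.mlp = nn.Sequential(
            nn.Linear(num_modalities * channels, (num_modalities * channels) // reduction),
            nn.ReLU(inplace=True),
            nn.Linear((num_modalities * channels) // reduction, num_modalities * channels),
            nn.Sigmoid(),  # gates in (0, 1), learned from the labeled data
        )

    def forward(self, feats):
        # feats: list of M tensors, each (B, C, D, H, W), one per modality
        stacked = torch.stack(feats, dim=1)          # (B, M, C, D, H, W)
        b, m, c = stacked.shape[:3]
        squeezed = self.pool(stacked.flatten(0, 1))  # (B*M, C, 1, 1, 1)
        weights = self.mlp(squeezed.view(b, m * c)).view(b, m, c, 1, 1, 1)
        # Weighted sum over modalities: each modality/channel contribution is
        # adaptively rescaled before fusion.
        return (stacked * weights).sum(dim=1)        # (B, C, D, H, W)


# Example: fuse encoder features from the four BRATS MR modalities
# (T1, T1c, T2, FLAIR), with made-up tensor sizes.
fusion = ModalityAttentionFusion(num_modalities=4, channels=32)
feats = [torch.randn(1, 32, 8, 16, 16) for _ in range(4)]
fused = fusion(feats)  # (1, 32, 8, 16, 16)
```

A sigmoid gate, as in the SE blocks the paper compares against, keeps each modality's contribution in (0, 1) independently; a softmax over the modality axis would be an alternative that forces the weights to compete.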
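For reference, the Dice scores in the RESULTS section follow the standard overlap definition between a predicted segmentation P and the ground truth G, where 1 means perfect overlap:

```latex
% Standard Dice overlap between prediction P and ground truth G
\[
  \mathrm{Dice}(P, G) = \frac{2\,\lvert P \cap G \rvert}{\lvert P \rvert + \lvert G \rvert}
\]
```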
format Online
Article
Text
id pubmed-7346322
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-7346322 2020-07-14 Multi-modality self-attention aware deep network for 3D biomedical segmentation Jia, Xibin; Liu, Yunfeng; Yang, Zhenghan; Yang, Dawei BMC Med Inform Decis Mak Research BioMed Central 2020-07-09 /pmc/articles/PMC7346322/ /pubmed/32646419 http://dx.doi.org/10.1186/s12911-020-1109-0 Text en © The Author(s). 2020. Open Access under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Research
Jia, Xibin
Liu, Yunfeng
Yang, Zhenghan
Yang, Dawei
Multi-modality self-attention aware deep network for 3D biomedical segmentation
title Multi-modality self-attention aware deep network for 3D biomedical segmentation
title_full Multi-modality self-attention aware deep network for 3D biomedical segmentation
title_fullStr Multi-modality self-attention aware deep network for 3D biomedical segmentation
title_full_unstemmed Multi-modality self-attention aware deep network for 3D biomedical segmentation
title_short Multi-modality self-attention aware deep network for 3D biomedical segmentation
title_sort multi-modality self-attention aware deep network for 3d biomedical segmentation
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346322/
https://www.ncbi.nlm.nih.gov/pubmed/32646419
http://dx.doi.org/10.1186/s12911-020-1109-0
work_keys_str_mv AT jiaxibin multimodalityselfattentionawaredeepnetworkfor3dbiomedicalsegmentation
AT liuyunfeng multimodalityselfattentionawaredeepnetworkfor3dbiomedicalsegmentation
AT yangzhenghan multimodalityselfattentionawaredeepnetworkfor3dbiomedicalsegmentation
AT yangdawei multimodalityselfattentionawaredeepnetworkfor3dbiomedicalsegmentation