
Focal cross transformer: multi-view brain tumor segmentation model based on cross window and focal self-attention

INTRODUCTION: Recently, the Transformer model and its variants have achieved great success in computer vision, surpassing the performance of convolutional neural networks (CNNs). The key to this success is the ability of self-attention to capture short- and long-range visual dependencies, efficiently learning global and long-range semantic interactions. However, Transformers also pose challenges: the computational cost of global self-attention grows quadratically with the input resolution, hindering their application to high-resolution images.

METHODS: In view of this, this paper proposes a multi-view brain tumor segmentation model based on cross windows and focal self-attention, a novel mechanism that enlarges the receptive field with parallel cross windows and improves global dependence through local fine-grained and global coarse-grained interactions. First, the receptive field is enlarged by computing self-attention over the horizontal and vertical stripes of the cross window in parallel, achieving strong modeling capability while limiting the computational cost. Second, focal self-attention over local fine-grained and global coarse-grained interactions enables the model to capture short- and long-range visual dependencies efficiently.

RESULTS: On the BraTS2021 validation set, the model achieves Dice similarity scores of 87.28%, 87.35%, and 93.28%, and 95% Hausdorff distances of 4.58 mm, 5.26 mm, and 3.78 mm for the enhancing tumor, tumor core, and whole tumor, respectively.

DISCUSSION: In summary, the proposed model achieves excellent performance while limiting the computational cost.
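To make the METHODS description concrete, below is a minimal PyTorch sketch of the cross-window idea: self-attention is computed in parallel over horizontal and vertical stripes, each branch taking half of the channels, so each token attends within its stripe rather than over the full H x W token grid. This is a CSWin-style reconstruction from the abstract, not the authors' implementation; the names CrossStripeAttention and stripe are illustrative.

import torch
import torch.nn as nn

class CrossStripeAttention(nn.Module):
    # Sketch: parallel horizontal/vertical stripe self-attention over a channel split.
    def __init__(self, dim, num_heads=4, stripe=4):
        super().__init__()
        assert dim % 2 == 0 and num_heads % 2 == 0
        self.stripe = stripe
        # one attention module per branch, each on half the channels
        self.attn_h = nn.MultiheadAttention(dim // 2, num_heads // 2, batch_first=True)
        self.attn_v = nn.MultiheadAttention(dim // 2, num_heads // 2, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def _stripe_attn(self, x, attn, horizontal):
        # x: (B, H, W, C2). Each token attends only within its stripe of
        # `self.stripe` rows (or columns), not over the whole feature map.
        B, H, W, C2 = x.shape
        s = self.stripe
        if horizontal:                        # stripes of s rows x full width
            x = x.reshape(B * (H // s), s * W, C2)
        else:                                 # stripes of full height x s columns
            x = x.permute(0, 2, 1, 3).reshape(B * (W // s), s * H, C2)
        out, _ = attn(x, x, x)                # self-attention inside each stripe
        if horizontal:
            return out.reshape(B, H, W, C2)
        return out.reshape(B, W, H, C2).permute(0, 2, 1, 3)

    def forward(self, x):
        # x: (B, H, W, C); H and W assumed divisible by the stripe width
        xh, xv = x.chunk(2, dim=-1)           # one channel half per branch
        y = torch.cat([self._stripe_attn(xh, self.attn_h, True),
                       self._stripe_attn(xv, self.attn_v, False)], dim=-1)
        return self.proj(y)                   # merge the two cross-shaped fields

# toy usage: an 8x8 feature map with 32 channels
x = torch.randn(2, 8, 8, 32)
print(CrossStripeAttention(dim=32)(x).shape)  # torch.Size([2, 8, 8, 32])

Concatenating the two branches gives each output position a cross-shaped receptive field (its full row band plus its full column band) at a fraction of the cost of global attention.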
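For the numbers in RESULTS, here is a small, self-contained illustration of how a Dice similarity score and a 95th-percentile Hausdorff distance (HD95) can be computed for binary masks. The set-based HD95 recipe below, built on SciPy's Euclidean distance transform, is a common simplification over full masks rather than extracted surfaces; it is not necessarily the exact evaluation code used for BraTS2021.

import numpy as np
from scipy import ndimage

def dice(pred, gt):
    # pred, gt: boolean arrays of the same shape; 1.0 means perfect overlap
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt):
    # distance from every foreground voxel of one mask to the other mask,
    # pooled from both directions and reduced by the 95th percentile
    # (robust to outlier voxels, unlike the plain maximum)
    d_to_gt = ndimage.distance_transform_edt(~gt)    # distance to nearest gt voxel
    d_to_pred = ndimage.distance_transform_edt(~pred)
    dists = np.concatenate([d_to_gt[pred], d_to_pred[gt]])
    return np.percentile(dists, 95)

pred = np.zeros((32, 32), bool); pred[8:20, 8:20] = True
gt = np.zeros((32, 32), bool); gt[10:22, 10:22] = True
print(f"Dice = {dice(pred, gt):.3f}, HD95 = {hd95(pred, gt):.2f}")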


Bibliographic Details
Main Authors: Zongren, Li; Silamu, Wushouer; Shurui, Feng; Guanghui, Yan
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023-05-12
Journal: Front Neurosci
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10213430/
https://www.ncbi.nlm.nih.gov/pubmed/37250393
http://dx.doi.org/10.3389/fnins.2023.1192867
Record: pubmed-10213430 (collection: PubMed; record format: MEDLINE/PubMed; institution: National Center for Biotechnology Information)
Rights: Copyright © 2023 Zongren, Silamu, Shurui and Guanghui. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). Use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution, or reproduction is permitted which does not comply with these terms.