Attention 3D U-Net with Multiple Skip Connections for Segmentation of Brain Tumor Images

Bibliographic Details
Main Authors: Nodirov, Jakhongir; Abdusalomov, Akmalbek Bobomirzaevich; Whangbo, Taeg Keun
Format: Online Article Text
Language: English
Published: MDPI 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9460422/
https://www.ncbi.nlm.nih.gov/pubmed/36080958
http://dx.doi.org/10.3390/s22176501
Description
Summary: 2D medical image segmentation models are popular among researchers using traditional and newer machine learning and deep learning techniques. In addition, 3D volumetric data have recently become more accessible, owing to the large number of studies on the creation of 3D volumes conducted in recent years. Using these 3D data, researchers have begun developing 3D segmentation models for tasks such as brain tumor segmentation and classification. Since more crucial features can be extracted from 3D data than from 2D data, 3D brain tumor detection models have grown in popularity among researchers. To date, significant research efforts have focused on 3D versions of popular models, such as 3D U-Net and V-Net. In this study, we used 3D brain image data and created a new architecture based on a 3D U-Net model that uses multiple skip connections with cost-efficient pretrained 3D MobileNetV2 blocks and attention modules. These pretrained MobileNetV2 blocks keep the parameter count small, maintaining a model size operable within our computational capacity, and help the model converge faster. We added extra skip connections between the encoder and decoder blocks to ease the exchange of extracted features between the two, making maximum use of those features. We also used attention modules to filter out irrelevant features passing through the skip connections, thereby saving computational power while achieving improved accuracy.
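To make the attention-gated skip connections described in the summary concrete, below is a minimal PyTorch sketch. It is not the authors' implementation: it assumes the common additive attention-gate formulation popularized by the Attention U-Net (Oktay et al., 2018), and the class name AttentionGate3D, the channel sizes, and the tensor shapes are illustrative assumptions only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGate3D(nn.Module):
    """Additive attention gate (illustrative sketch, not the paper's code).

    Uses the coarser decoder (gating) signal to compute per-voxel
    attention coefficients that suppress irrelevant encoder features
    before they cross the skip connection.
    """

    def __init__(self, enc_channels: int, gate_channels: int, inter_channels: int):
        super().__init__()
        # 1x1x1 convolutions project both inputs to a shared intermediate space.
        self.w_enc = nn.Conv3d(enc_channels, inter_channels, kernel_size=1)
        self.w_gate = nn.Conv3d(gate_channels, inter_channels, kernel_size=1)
        self.psi = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv3d(inter_channels, 1, kernel_size=1),
            nn.Sigmoid(),  # per-voxel attention coefficients in [0, 1]
        )

    def forward(self, enc_feat: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser gating signal to the encoder feature's resolution.
        gate = F.interpolate(
            gate, size=enc_feat.shape[2:], mode="trilinear", align_corners=False
        )
        attn = self.psi(self.w_enc(enc_feat) + self.w_gate(gate))
        # Only the attended features continue along the skip connection.
        return enc_feat * attn


if __name__ == "__main__":
    # Hypothetical shapes: encoder features at 32^3, decoder gate at 16^3.
    enc = torch.randn(1, 64, 32, 32, 32)
    g = torch.randn(1, 128, 16, 16, 16)
    gated = AttentionGate3D(enc_channels=64, gate_channels=128, inter_channels=32)(enc, g)
    print(gated.shape)  # torch.Size([1, 64, 32, 32, 32])

In a full model of the kind the summary describes, the gated output would be concatenated with the upsampled decoder features at each resolution, with extra skip connections wired in the same fashion; the encoder blocks themselves would be the pretrained 3D MobileNetV2 blocks mentioned above.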