Multi-modality self-attention aware deep network for 3D biomedical segmentation
BACKGROUND: Deep learning-based segmentation models have gradually been applied to biomedical images and have achieved state-of-the-art performance in 3D biomedical segmentation. However, most existing biomedical segmentation research considers application cases that adapt a single...
Main Authors: Jia, Xibin; Liu, Yunfeng; Yang, Zhenghan; Yang, Dawei
Format: Online Article Text
Language: English
Published: BioMed Central, 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7346322/ https://www.ncbi.nlm.nih.gov/pubmed/32646419 http://dx.doi.org/10.1186/s12911-020-1109-0
Similar Items
- FusionAtt: Deep Fusional Attention Networks for Multi-Channel Biomedical Signals
  by: Yuan, Ye, et al.
  Published: (2019)
- CMANet: Cross-Modality Attention Network for Indoor-Scene Semantic Segmentation
  by: Zhu, Longze, et al.
  Published: (2022)
- Can a proposed double branch multimodality-contribution-aware TripNet improve the prediction performance of the microvascular invasion of hepatocellular carcinoma based on small samples?
  by: Deng, Yuhui, et al.
  Published: (2022)
- Axial Attention Convolutional Neural Network for Brain Tumor Segmentation with Multi-Modality MRI Scans
  by: Tian, Weiwei, et al.
  Published: (2022)
- Automated Multi-View Multi-Modal Assessment of COVID-19 Patients Using Reciprocal Attention and Biomedical Transform
  by: Li, Yanhan, et al.
  Published: (2022)