TiM-Net: Transformer in M-Net for Retinal Vessel Segmentation
Main authors: , , , , , , , ,
Format: Online Article Text
Language: English
Published: Hindawi, 2022
Subjects:
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9293566/ | https://www.ncbi.nlm.nih.gov/pubmed/35859930 | http://dx.doi.org/10.1155/2022/9016401
Summary: The retinal image is a crucial window for the clinical observation of cardiovascular, cerebrovascular, and other correlated diseases, and retinal vessel segmentation is of great benefit to clinical diagnosis. Recently, the convolutional neural network (CNN) has become a dominant method in the retinal vessel segmentation field, especially U-shaped CNN models. However, the conventional CNN encoder is vulnerable to noisy interference, and the long-range relationships in fundus images have not been fully exploited. In this paper, we propose a novel model called Transformer in M-Net (TiM-Net), built on M-Net, diverse attention mechanisms, and weighted side-output layers, to perform retinal vessel segmentation effectively. First, to alleviate the effects of noise, a dual-attention mechanism based on channel and spatial attention is designed. Then, the self-attention mechanism of the Transformer is introduced into the skip connections to re-encode features and model long-range relationships explicitly. Finally, a weighted SideOut layer is proposed to better utilize the features from each side layer. Extensive experiments are conducted on three public data sets to show the effectiveness and robustness of TiM-Net compared with state-of-the-art baselines. Both quantitative and qualitative results demonstrate its clinical practicality. Moreover, variants of TiM-Net also achieve competitive performance, demonstrating its scalability and generalization ability. The code of our model is available at https://github.com/ZX-ECJTU/TiM-Net.
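The dual-attention step described in the summary pairs a channel gate with a spatial gate to suppress noisy responses before decoding. The sketch below is a generic CBAM-style channel + spatial attention block in PyTorch, shown only to illustrate the idea; it is not the authors' TiM-Net code (the official implementation is in the GitHub repository linked above), and the module names, reduction ratio, and kernel size are assumptions.

```python
# Illustrative sketch of a channel + spatial dual-attention block (CBAM-style).
# NOT the TiM-Net implementation; see https://github.com/ZX-ECJTU/TiM-Net for
# the official code. Reduction ratio and kernel size are assumed values.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Shared MLP applied to pooled channel descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Global average- and max-pooled descriptors -> shared MLP -> sigmoid gate.
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        gate = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * gate


class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel-wise mean and max maps -> 2D conv -> sigmoid gate per location.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate


class DualAttention(nn.Module):
    """Channel attention followed by spatial attention on a feature map."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial(self.channel(x))


if __name__ == "__main__":
    feats = torch.randn(2, 64, 48, 48)        # dummy encoder feature map
    print(DualAttention(64)(feats).shape)     # torch.Size([2, 64, 48, 48])
```

Applying the channel gate before the spatial gate is a common ordering for such blocks; the exact placement and internal design used in TiM-Net may differ and should be checked against the linked repository.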