CPMF-Net: Multi-Feature Network Based on Collaborative Patches for Retinal Vessel Segmentation
Main Authors:
Format: Online Article Text
Language: English
Published: MDPI, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9736046/ https://www.ncbi.nlm.nih.gov/pubmed/36501911 http://dx.doi.org/10.3390/s22239210
Summary: As an important basis of clinical diagnosis, the morphology of retinal vessels is very useful for the early diagnosis of some eye diseases. In recent years, with the rapid development of deep learning technology, automatic segmentation methods based on it have made considerable progress in the field of retinal blood vessel segmentation. However, due to the complexity of vessel structure and the poor quality of some images, retinal vessel segmentation, especially the segmentation of capillaries, is still a challenging task. In this work, we propose a new retinal blood vessel segmentation method based on collaborative patches, called multi-feature segmentation. First, we design a new collaborative patch training method that effectively compensates for the pixel information lost during patch extraction through information transmission between collaborative patches. In addition, the collaborative patch training strategy combines low occupancy, a simple structure, and high accuracy. Then, we design a multi-feature network to gather a variety of information features. The hierarchical network structure, together with the integration of the adaptive coordinate attention module and the gated self-attention module, enables these rich information features to be used for segmentation. Finally, we evaluate the proposed method on two public datasets, DRIVE and STARE, and compare its results with those of nine other advanced methods. The results show that our method outperforms the other existing methods.
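The summary notes that patch extraction loses pixel information at patch borders and that the paper's collaborative patches compensate for this by passing information between neighbouring patches. The sketch below illustrates only the generic baseline idea that such schemes build on: extracting overlapping patches from a fundus image and averaging per-patch predictions where they overlap. It is a minimal, hypothetical illustration, not the paper's actual collaborative-patch mechanism; the function names, patch size (48), and stride (24) are assumptions chosen for the example.

```python
import numpy as np

def extract_patches(image, patch_size=48, stride=24):
    """Split a 2D image into overlapping patches.

    Using a stride smaller than the patch size gives overlapping patches,
    a common way to reduce the border-information loss that patch
    extraction introduces. The paper's collaborative patches go further by
    transmitting information between patches, which is not modelled here.
    """
    h, w = image.shape
    patches, coords = [], []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
            coords.append((y, x))
    return np.stack(patches), coords

def stitch_patches(pred_patches, coords, image_shape, patch_size=48):
    """Recombine per-patch predictions, averaging wherever patches overlap."""
    acc = np.zeros(image_shape, dtype=np.float32)
    cnt = np.zeros(image_shape, dtype=np.float32)
    for pred, (y, x) in zip(pred_patches, coords):
        acc[y:y + patch_size, x:x + patch_size] += pred
        cnt[y:y + patch_size, x:x + patch_size] += 1.0
    return acc / np.maximum(cnt, 1.0)

# Toy usage: a fake 96x96 fundus channel and an identity "model" standing
# in for a segmentation network.
image = np.random.rand(96, 96).astype(np.float32)
patches, coords = extract_patches(image)
preds = patches  # placeholder for per-patch vessel-probability predictions
vessel_map = stitch_patches(preds, coords, image.shape)
assert vessel_map.shape == image.shape
```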