
A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation

Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder part, and the max-pooling index is applied during the upsampling process in the feature fusion decoder of the improved network. Skip connections transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotating, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied during image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) adopted as the evaluation metrics. Finally, detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen values of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared to other state-of-the-art methods, demonstrating that the proposed method achieves superior, competitive performance.


Bibliographic Details
Main Authors: Yang, Dan, Liu, Guoru, Ren, Mengcheng, Xu, Bin, Wang, Jiao
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517387/
https://www.ncbi.nlm.nih.gov/pubmed/33286584
http://dx.doi.org/10.3390/e22080811
author Yang, Dan
Liu, Guoru
Ren, Mengcheng
Xu, Bin
Wang, Jiao
collection PubMed
description Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder part, and the max-pooling index is applied during the upsampling process in the feature fusion decoder of the improved network. Skip connections transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotating, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied during image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) adopted as the evaluation metrics. Finally, detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen values of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared to other state-of-the-art methods, demonstrating that the proposed method achieves superior, competitive performance.
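The cost-sensitive loss described above combines a Dice-coefficient term with cross-entropy. A minimal NumPy sketch of such a combination is shown below; the mixing weight `lam` and the exact weighting scheme are assumptions for illustration, not the authors' published formulation:

```python
import numpy as np

def dice_ce_loss(pred, target, lam=0.5, eps=1e-7):
    """Weighted sum of a soft Dice loss and binary cross-entropy.

    pred   -- predicted vessel probabilities in (0, 1)
    target -- binary ground-truth vessel mask (same shape)
    lam    -- assumed mixing weight between the two terms
    """
    pred = np.clip(pred, eps, 1.0 - eps)
    # Soft Dice loss: 1 - 2|P.G| / (|P| + |G|), smoothed by eps
    inter = np.sum(pred * target)
    dice = 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    # Binary cross-entropy, averaged over all pixels
    bce = -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return lam * dice + (1.0 - lam) * bce
```

The Dice term directly rewards overlap with the thin vessel mask (helpful under class imbalance), while the cross-entropy term keeps per-pixel gradients well behaved.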
format Online
Article
Text
id pubmed-7517387
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7517387 2020-11-09 A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation. Yang, Dan; Liu, Guoru; Ren, Mengcheng; Xu, Bin; Wang, Jiao. Entropy (Basel), Article. MDPI 2020-07-24 /pmc/articles/PMC7517387/ /pubmed/33286584 http://dx.doi.org/10.3390/e22080811 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
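The decoder's max-pooling-index upsampling mentioned in the abstract can be illustrated with a toy NumPy sketch: pooling records where each maximum came from, and unpooling scatters values back to exactly those positions. This assumes 2×2 windows on an even-sized single-channel map; the function names are illustrative, not from the paper:

```python
import numpy as np

def max_pool_with_indices(x):
    """2x2 max pooling that also returns the argmax position in each window."""
    h, w = x.shape  # assumed even dimensions
    blocks = x.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
    idx = blocks.argmax(axis=1)                    # flat position within each 2x2 window
    pooled = blocks.max(axis=1).reshape(h // 2, w // 2)
    return pooled, idx

def max_unpool(pooled, idx):
    """Scatter pooled values back to their recorded positions; other cells stay zero."""
    ph, pw = pooled.shape
    blocks = np.zeros((ph * pw, 4))
    blocks[np.arange(ph * pw), idx] = pooled.ravel()
    return blocks.reshape(ph, pw, 2, 2).transpose(0, 2, 1, 3).reshape(ph * 2, pw * 2)
```

Reusing the encoder's argmax positions in the decoder preserves the spatial location of strong responses, which matters for thin structures such as vessels.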
title A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517387/
https://www.ncbi.nlm.nih.gov/pubmed/33286584
http://dx.doi.org/10.3390/e22080811