RFARN: Retinal vessel segmentation based on reverse fusion attention residual network
Accurate segmentation of retinal vessels is critical to understanding the mechanisms, diagnosis, and treatment of many ocular pathologies. Because of the poor contrast, inhomogeneous background, and complex structure of retinal fundus images, accurately segmenting blood vessels from them remains challenging…
Main authors: | Liu, Wenhuan; Jiang, Yun; Zhang, Jingyao; Ma, Zeqi |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2021 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8641866/ https://www.ncbi.nlm.nih.gov/pubmed/34860847 http://dx.doi.org/10.1371/journal.pone.0257256 |
_version_ | 1784609572117282816 |
---|---|
author | Liu, Wenhuan; Jiang, Yun; Zhang, Jingyao; Ma, Zeqi |
author_facet | Liu, Wenhuan; Jiang, Yun; Zhang, Jingyao; Ma, Zeqi |
author_sort | Liu, Wenhuan |
collection | PubMed |
description | Accurate segmentation of retinal vessels is critical to understanding the mechanisms, diagnosis, and treatment of many ocular pathologies. Because of the poor contrast, inhomogeneous background, and complex structure of retinal fundus images, accurately segmenting blood vessels from them remains challenging. In this paper, we propose an effective framework for retinal vessel segmentation whose innovations lie mainly in the image pre-processing and segmentation stages. First, we enhance three publicly available fundus datasets with the multiscale retinex with color restoration (MSRCR) method, which effectively suppresses noise and highlights the vessel structure, creating a good basis for the segmentation phase. The processed fundus images are then fed into a Reverse Fusion Attention Residual Network (RFARN) for training to achieve more accurate retinal vessel segmentation. In the RFARN, a Reverse Channel Attention Module (RCAM) and a Reverse Spatial Attention Module (RSAM) highlight shallow details in the channel and spatial dimensions, and fuse deep local features with shallow global features to ensure the continuity and integrity of the segmented vessels. On the DRIVE, STARE, and CHASE datasets, the method achieved accuracy (Acc) of 0.9712, 0.9822, and 0.9780; sensitivity (Se) of 0.8788, 0.8874, and 0.8352; specificity (Sp) of 0.9803, 0.9891, and 0.9890; area under the ROC curve (AUC) of 0.9910, 0.9952, and 0.9904; and F1-score of 0.8453, 0.8707, and 0.8185, respectively. Compared with existing retinal image segmentation methods such as UNet, R2UNet, DUNet, HAnet, Sine-Net, and FANet, our method achieved better vessel segmentation performance on all three fundus datasets. |
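The accuracy, sensitivity, specificity, and F1-score reported in the abstract follow the standard pixel-wise confusion-matrix definitions for binary vessel segmentation. As a quick reference, here is a minimal sketch of those definitions; the sample counts in the example are illustrative only and are not taken from the paper:

```python
def segmentation_metrics(tp, fp, tn, fn):
    """Standard pixel-wise metrics for binary (vessel vs. background) segmentation.

    tp/fp/tn/fn are counts of true-positive, false-positive,
    true-negative, and false-negative pixels.
    """
    acc = (tp + tn) / (tp + fp + tn + fn)      # accuracy (Acc)
    se = tp / (tp + fn)                        # sensitivity (Se), a.k.a. recall
    sp = tn / (tn + fp)                        # specificity (Sp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * se / (precision + se) # F1-score
    return acc, se, sp, f1

# Illustrative counts only (not from the paper):
acc, se, sp, f1 = segmentation_metrics(tp=90, fp=5, tn=95, fn=10)
```

Note that F1 computed this way equals 2·TP / (2·TP + FP + FN), so it depends only on how vessel pixels are handled, whereas Acc is dominated by the background class in fundus images.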
format | Online Article Text |
id | pubmed-8641866 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-8641866 2021-12-04 RFARN: Retinal vessel segmentation based on reverse fusion attention residual network Liu, Wenhuan; Jiang, Yun; Zhang, Jingyao; Ma, Zeqi PLoS One Research Article Accurate segmentation of retinal vessels is critical to understanding the mechanisms, diagnosis, and treatment of many ocular pathologies. Because of the poor contrast, inhomogeneous background, and complex structure of retinal fundus images, accurately segmenting blood vessels from them remains challenging. In this paper, we propose an effective framework for retinal vessel segmentation whose innovations lie mainly in the image pre-processing and segmentation stages. First, we enhance three publicly available fundus datasets with the multiscale retinex with color restoration (MSRCR) method, which effectively suppresses noise and highlights the vessel structure, creating a good basis for the segmentation phase. The processed fundus images are then fed into a Reverse Fusion Attention Residual Network (RFARN) for training to achieve more accurate retinal vessel segmentation. In the RFARN, a Reverse Channel Attention Module (RCAM) and a Reverse Spatial Attention Module (RSAM) highlight shallow details in the channel and spatial dimensions, and fuse deep local features with shallow global features to ensure the continuity and integrity of the segmented vessels. On the DRIVE, STARE, and CHASE datasets, the method achieved accuracy (Acc) of 0.9712, 0.9822, and 0.9780; sensitivity (Se) of 0.8788, 0.8874, and 0.8352; specificity (Sp) of 0.9803, 0.9891, and 0.9890; area under the ROC curve (AUC) of 0.9910, 0.9952, and 0.9904; and F1-score of 0.8453, 0.8707, and 0.8185, respectively. Compared with existing retinal image segmentation methods such as UNet, R2UNet, DUNet, HAnet, Sine-Net, and FANet, our method achieved better vessel segmentation performance on all three fundus datasets. Public Library of Science 2021-12-03 /pmc/articles/PMC8641866/ /pubmed/34860847 http://dx.doi.org/10.1371/journal.pone.0257256 Text en © 2021 Liu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
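The RCAM/RSAM modules described above weight shallow features using attention derived from deeper layers in a "reverse" fashion. The paper's exact module design is not reproduced here; the following is a minimal, hedged illustration of the generic reverse-attention idea only (all shapes, names, and the plain-list representation are assumptions for this sketch): shallow features are scaled by the complement of the deep layer's sigmoid-activated saliency, so regions the deep features already capture are suppressed and missed regions are highlighted.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reverse_spatial_attention(shallow, deep_saliency):
    """Weight a shallow feature map by the *complement* of deep saliency.

    shallow:       H x W list-of-lists of shallow feature activations
    deep_saliency: H x W list-of-lists of deep-layer logits (same size)

    Where the deep layer is confident (high saliency), the weight
    1 - sigmoid(d) is near 0, suppressing those positions; where it
    is not, the weight is near 1, preserving shallow detail.
    """
    return [
        [s * (1.0 - sigmoid(d)) for s, d in zip(s_row, d_row)]
        for s_row, d_row in zip(shallow, deep_saliency)
    ]

# Tiny 2x2 example: a strongly positive deep logit yields a weight near 0,
# a strongly negative one a weight near 1, and a zero logit exactly 0.5.
out = reverse_spatial_attention([[1.0, 1.0], [1.0, 1.0]],
                                [[10.0, -10.0], [0.0, 0.0]])
```

In a real network this elementwise gating would run on feature tensors per channel (RCAM) or per spatial position (RSAM), typically followed by fusion with the deep features via addition or concatenation.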
spellingShingle | Research Article Liu, Wenhuan Jiang, Yun Zhang, Jingyao Ma, Zeqi RFARN: Retinal vessel segmentation based on reverse fusion attention residual network |
title | RFARN: Retinal vessel segmentation based on reverse fusion attention residual network |
title_full | RFARN: Retinal vessel segmentation based on reverse fusion attention residual network |
title_fullStr | RFARN: Retinal vessel segmentation based on reverse fusion attention residual network |
title_full_unstemmed | RFARN: Retinal vessel segmentation based on reverse fusion attention residual network |
title_short | RFARN: Retinal vessel segmentation based on reverse fusion attention residual network |
title_sort | rfarn: retinal vessel segmentation based on reverse fusion attention residual network |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8641866/ https://www.ncbi.nlm.nih.gov/pubmed/34860847 http://dx.doi.org/10.1371/journal.pone.0257256 |
work_keys_str_mv | AT liuwenhuan rfarnretinalvesselsegmentationbasedonreversefusionattentionresidualnetwork AT jiangyun rfarnretinalvesselsegmentationbasedonreversefusionattentionresidualnetwork AT zhangjingyao rfarnretinalvesselsegmentationbasedonreversefusionattentionresidualnetwork AT mazeqi rfarnretinalvesselsegmentationbasedonreversefusionattentionresidualnetwork |