
A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation

Accurate segmentation of retinal vessels is an essential prerequisite for the subsequent analysis of fundus images. Recently, a number of deep-learning-based methods have been proposed and have demonstrated promising segmentation performance, especially U-Net and its variants. However, tiny ve...

Full description

Bibliographic Details
Main Authors: Ye, Zhipin, Liu, Yingqian, Jing, Teng, He, Zhaoming, Zhou, Ling
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10650600/
https://www.ncbi.nlm.nih.gov/pubmed/37960597
http://dx.doi.org/10.3390/s23218899
author Ye, Zhipin
Liu, Yingqian
Jing, Teng
He, Zhaoming
Zhou, Ling
collection PubMed
description Accurate segmentation of retinal vessels is an essential prerequisite for the subsequent analysis of fundus images. Recently, a number of deep-learning-based methods have been proposed and have demonstrated promising segmentation performance, especially U-Net and its variants. However, tiny vessels and low-contrast vessels are hard to detect due to the loss of spatial details caused by consecutive down-sampling operations and the inadequate fusion of multi-level features caused by vanilla skip connections. To address these issues and enhance the segmentation precision of retinal vessels, we propose a novel high-resolution network with strip attention. Instead of a U-Net-shaped architecture, the proposed network follows an HRNet-shaped architecture as the basic network, learning high-resolution representations throughout the training process. In addition, a strip attention module comprising a horizontal attention mechanism and a vertical attention mechanism is designed to capture long-range dependencies in the horizontal and vertical directions by calculating the similarity between each pixel and all pixels in the same row and the same column, respectively. For effective multi-layer feature fusion, we incorporate the strip attention module into the basic network to dynamically guide adjacent hierarchical features. Experimental results on the DRIVE and STARE datasets show that the proposed method can extract more tiny and low-contrast vessels than existing mainstream methods, achieving accuracies of 96.16% and 97.08% and sensitivities of 82.68% and 89.36%, respectively. The proposed method has the potential to aid in the analysis of fundus images.
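The strip attention mechanism is described above only at the level of the abstract. As a rough illustration, the sketch below implements row-wise (horizontal) and column-wise (vertical) self-attention over a feature map, assuming PyTorch; the class name, the query/key channel reduction, and the learnable residual weight are assumptions made for the sake of a runnable example, not the authors' published implementation.

# Minimal sketch of a strip attention block (assumed PyTorch; illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripAttention(nn.Module):
    """Attends along rows (horizontal) and columns (vertical) of a feature map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # 1x1 projections for query/key/value, as in standard self-attention.
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, val = self.query(x), self.key(x), self.value(x)

        # Horizontal attention: each pixel attends to all pixels in the same row.
        attn_h = F.softmax(torch.einsum("bchw,bchv->bhwv", q, k), dim=-1)
        out_h = torch.einsum("bhwv,bchv->bchw", attn_h, val)

        # Vertical attention: each pixel attends to all pixels in the same column.
        attn_v = F.softmax(torch.einsum("bchw,bcuw->bwhu", q, k), dim=-1)
        out_v = torch.einsum("bwhu,bcuw->bchw", attn_v, val)

        # Combine both directional contexts with the input via a weighted residual.
        return x + self.gamma * (out_h + out_v)

if __name__ == "__main__":
    feats = torch.randn(1, 32, 64, 64)       # dummy feature map from a fundus image
    print(StripAttention(32)(feats).shape)   # torch.Size([1, 32, 64, 64])

In the paper, such a module is incorporated into an HRNet-shaped backbone to guide the fusion of adjacent hierarchical features; the sketch above shows only the directional attention itself.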
format Online
Article
Text
id pubmed-10650600
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10650600 2023-11-01. A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation. Ye, Zhipin; Liu, Yingqian; Jing, Teng; He, Zhaoming; Zhou, Ling. Sensors (Basel), Article. MDPI, 2023-11-01. /pmc/articles/PMC10650600/ /pubmed/37960597 http://dx.doi.org/10.3390/s23218899. Text, en. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10650600/
https://www.ncbi.nlm.nih.gov/pubmed/37960597
http://dx.doi.org/10.3390/s23218899