TCU-Net: Transformer Embedded in Convolutional U-Shaped Network for Retinal Vessel Segmentation

Bibliographic Details
Main Authors: Shi, Zidi; Li, Yu; Zou, Hua; Zhang, Xuedong
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10223195/
https://www.ncbi.nlm.nih.gov/pubmed/37430810
http://dx.doi.org/10.3390/s23104897
author Shi, Zidi
Li, Yu
Zou, Hua
Zhang, Xuedong
collection PubMed
description Optical coherence tomography angiography (OCTA) provides a detailed visualization of the vascular system to aid in the detection and diagnosis of ophthalmic disease. However, accurately extracting microvascular details from OCTA images remains a challenging task due to the limitations of pure convolutional networks. We propose a novel end-to-end transformer-based network architecture called TCU-Net for OCTA retinal vessel segmentation tasks. To address the loss of vascular features caused by convolutional operations, an efficient cross-fusion transformer module is introduced to replace the original skip connections of U-Net. The transformer module interacts with the encoder’s multiscale vascular features to enrich vascular information while achieving linear computational complexity. Additionally, we design an efficient channel-wise cross attention module that fuses the multiscale features with the fine-grained details from the decoding stages, resolving the semantic bias between them and enhancing effective vascular information. The model was evaluated on the dedicated Retinal OCTA Segmentation (ROSE) dataset. The accuracy values of TCU-Net tested on the ROSE-1 dataset with SVC, DVC, and SVC+DVC are 0.9230, 0.9912, and 0.9042, respectively, and the corresponding AUC values are 0.9512, 0.9823, and 0.9170. For the ROSE-2 dataset, the accuracy and AUC are 0.9454 and 0.8623, respectively. The experiments demonstrate that TCU-Net outperforms state-of-the-art approaches in vessel segmentation performance and robustness.
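The channel-wise cross attention described in the abstract can be sketched as a squeeze-and-excitation-style gating between decoder features and encoder (skip-path) features: each feature map is pooled to a per-channel descriptor, the two descriptors interact to produce per-channel weights, and those weights re-scale the skip features before fusion. The shapes, function names, and exact gating form below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_cross_attention(skip, dec):
    """Illustrative channel-wise cross attention (assumed form, not the paper's code).

    skip, dec: feature maps of shape (C, H, W) from the encoder skip path
    and the decoder, respectively. Global average pooling "squeezes" each
    map to a per-channel descriptor; their interaction yields per-channel
    weights that re-scale the skip features before fusing with the decoder.
    """
    c = skip.shape[0]
    # Squeeze: global average pooling over spatial dims -> (C,)
    s = skip.mean(axis=(1, 2))
    d = dec.mean(axis=(1, 2))
    # Cross interaction: per-channel attention weights from both descriptors
    attn = softmax(s * d / np.sqrt(c))  # (C,), sums to 1
    # Excite: re-weight skip channels, then fuse with decoder features
    return dec + attn[:, None, None] * skip

# Toy usage with random feature maps
rng = np.random.default_rng(0)
skip = rng.standard_normal((8, 4, 4))
dec = rng.standard_normal((8, 4, 4))
out = channel_cross_attention(skip, dec)
assert out.shape == (8, 4, 4)
```

The additive fusion and the scalar product of the pooled descriptors are simplifications; the paper's module presumably uses learned projections, but the overall squeeze–interact–excite pattern is the same.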
format Online Article Text
id pubmed-10223195
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10223195 2023-05-28 TCU-Net: Transformer Embedded in Convolutional U-Shaped Network for Retinal Vessel Segmentation. Shi, Zidi; Li, Yu; Zou, Hua; Zhang, Xuedong. Sensors (Basel), Article. MDPI 2023-05-19 /pmc/articles/PMC10223195/ /pubmed/37430810 http://dx.doi.org/10.3390/s23104897 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland.
This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title TCU-Net: Transformer Embedded in Convolutional U-Shaped Network for Retinal Vessel Segmentation
topic Article