An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography
BACKGROUND: Differential artery-vein (AV) analysis in optical coherence tomography angiography (OCTA) holds promise for the early detection of eye diseases. However, currently available methods for AV analysis are limited to binary processing of retinal vasculature in OCTA, without quantitative information of vascular perfusion intensity.
Main Authors: | Abtahi, Mansour; Le, David; Ebrahimi, Behrouz; Dadzie, Albert K.; Lim, Jennifer I.; Yao, Xincheng
Format: | Online Article Text
Language: | English
Published: | Nature Publishing Group UK, 2023
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10110614/ https://www.ncbi.nlm.nih.gov/pubmed/37069396 http://dx.doi.org/10.1038/s43856-023-00287-9
_version_ | 1785027297888174080 |
author | Abtahi, Mansour; Le, David; Ebrahimi, Behrouz; Dadzie, Albert K.; Lim, Jennifer I.; Yao, Xincheng
author_facet | Abtahi, Mansour; Le, David; Ebrahimi, Behrouz; Dadzie, Albert K.; Lim, Jennifer I.; Yao, Xincheng
author_sort | Abtahi, Mansour |
collection | PubMed |
description | BACKGROUND: Differential artery-vein (AV) analysis in optical coherence tomography angiography (OCTA) holds promise for the early detection of eye diseases. However, currently available methods for AV analysis are limited to binary processing of retinal vasculature in OCTA, without quantitative information of vascular perfusion intensity. This study aims to develop and validate a method for quantitative AV analysis of vascular perfusion intensity. METHOD: A deep learning network, AVA-Net, was developed for automated AV area (AVA) segmentation in OCTA. Seven new OCTA features, including arterial area (AA), venous area (VA), AVA ratio (AVAR), total perfusion intensity density (T-PID), arterial PID (A-PID), venous PID (V-PID), and arterial-venous PID ratio (AV-PIDR), were extracted and tested for early detection of diabetic retinopathy (DR). Each of these seven features was evaluated on OCTA images from healthy controls, diabetic patients without DR (NoDR), and patients with mild DR. RESULTS: The area features, i.e., AA, VA, and AVAR, can reveal significant differences between the control and mild DR groups. Vascular perfusion parameters, including T-PID and A-PID, can differentiate mild DR from the control group. AV-PIDR can disclose significant differences among all three groups, i.e., control, NoDR, and mild DR. After Bonferroni correction, the combination of A-PID and AV-PIDR can reveal significant differences among all three groups. CONCLUSIONS: AVA-Net, which is available on GitHub for open access, enables quantitative analysis of AV area and vascular perfusion intensity. Comparative analysis revealed AV-PIDR as the most sensitive feature for OCTA detection of early DR. Ensemble AV feature analysis, e.g., the combination of A-PID and AV-PIDR, can further improve performance for early DR assessment.
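The seven AV features named in the abstract can be computed directly from an en face OCTA image and its AVA segmentation map. The snippet below is a minimal sketch of that computation, assuming the AVA map labels each pixel as artery, vein, or background and that each perfusion intensity density (PID) is taken as the mean OCTA intensity within the corresponding region; the label values, function name, and exact PID definition are illustrative assumptions, not taken from the AVA-Net repository.

```python
import numpy as np

# Illustrative label convention for the AVA segmentation map (assumption, not from the paper).
ARTERY, VEIN, BACKGROUND = 1, 2, 0

def av_features(octa: np.ndarray, av_map: np.ndarray) -> dict:
    """Compute the seven AV features from an en face OCTA image and an
    AVA segmentation map of the same shape.

    octa   : 2-D array of perfusion intensities, scaled to [0, 1].
    av_map : 2-D array with pixel labels ARTERY, VEIN, or BACKGROUND.
    """
    total_px = octa.size
    artery = av_map == ARTERY
    vein = av_map == VEIN

    # Area features: fraction of the image covered by each class.
    aa = artery.sum() / total_px          # arterial area (AA)
    va = vein.sum() / total_px            # venous area (VA)
    avar = aa / va                        # AVA ratio (AVAR)

    # Perfusion intensity density (PID): taken here as the mean OCTA
    # intensity over a region -- an assumption about the exact definition.
    t_pid = octa.mean()                   # total PID (T-PID)
    a_pid = octa[artery].mean()           # arterial PID (A-PID)
    v_pid = octa[vein].mean()             # venous PID (V-PID)
    av_pidr = a_pid / v_pid               # arterial-venous PID ratio (AV-PIDR)

    return {"AA": aa, "VA": va, "AVAR": avar,
            "T-PID": t_pid, "A-PID": a_pid, "V-PID": v_pid, "AV-PIDR": av_pidr}
```

Calling av_features(octa, av_map) on a macular OCTA scan and its AVA map would return a dictionary of the seven features used for the group comparisons (control vs. NoDR vs. mild DR) described in the abstract.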
format | Online Article Text |
id | pubmed-10110614 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10110614 2023-04-19 An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography Abtahi, Mansour; Le, David; Ebrahimi, Behrouz; Dadzie, Albert K.; Lim, Jennifer I.; Yao, Xincheng Commun Med (Lond) Article BACKGROUND: Differential artery-vein (AV) analysis in optical coherence tomography angiography (OCTA) holds promise for the early detection of eye diseases. However, currently available methods for AV analysis are limited to binary processing of retinal vasculature in OCTA, without quantitative information of vascular perfusion intensity. This study aims to develop and validate a method for quantitative AV analysis of vascular perfusion intensity. METHOD: A deep learning network, AVA-Net, was developed for automated AV area (AVA) segmentation in OCTA. Seven new OCTA features, including arterial area (AA), venous area (VA), AVA ratio (AVAR), total perfusion intensity density (T-PID), arterial PID (A-PID), venous PID (V-PID), and arterial-venous PID ratio (AV-PIDR), were extracted and tested for early detection of diabetic retinopathy (DR). Each of these seven features was evaluated on OCTA images from healthy controls, diabetic patients without DR (NoDR), and patients with mild DR. RESULTS: The area features, i.e., AA, VA, and AVAR, can reveal significant differences between the control and mild DR groups. Vascular perfusion parameters, including T-PID and A-PID, can differentiate mild DR from the control group. AV-PIDR can disclose significant differences among all three groups, i.e., control, NoDR, and mild DR. After Bonferroni correction, the combination of A-PID and AV-PIDR can reveal significant differences among all three groups. CONCLUSIONS: AVA-Net, which is available on GitHub for open access, enables quantitative analysis of AV area and vascular perfusion intensity. Comparative analysis revealed AV-PIDR as the most sensitive feature for OCTA detection of early DR. Ensemble AV feature analysis, e.g., the combination of A-PID and AV-PIDR, can further improve performance for early DR assessment. Nature Publishing Group UK 2023-04-17 /pmc/articles/PMC10110614/ /pubmed/37069396 http://dx.doi.org/10.1038/s43856-023-00287-9 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article; Abtahi, Mansour; Le, David; Ebrahimi, Behrouz; Dadzie, Albert K.; Lim, Jennifer I.; Yao, Xincheng; An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography
title | An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography |
title_full | An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography |
title_fullStr | An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography |
title_full_unstemmed | An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography |
title_short | An open-source deep learning network AVA-Net for arterial-venous area segmentation in optical coherence tomography angiography |
title_sort | open-source deep learning network ava-net for arterial-venous area segmentation in optical coherence tomography angiography |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10110614/ https://www.ncbi.nlm.nih.gov/pubmed/37069396 http://dx.doi.org/10.1038/s43856-023-00287-9 |
work_keys_str_mv | AT abtahimansour anopensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT ledavid anopensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT ebrahimibehrouz anopensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT dadziealbertk anopensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT limjenniferi anopensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT yaoxincheng anopensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT abtahimansour opensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT ledavid opensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT ebrahimibehrouz opensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT dadziealbertk opensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT limjenniferi opensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography AT yaoxincheng opensourcedeeplearningnetworkavanetforarterialvenousareasegmentationinopticalcoherencetomographyangiography |