Deep Learning-based Quantification of Anterior Segment OCT Parameters
Main authors: | Soh, Zhi Da; Tan, Mingrui; Nongpiur, Monisha Esther; Yu, Marco; Qian, Chaoxu; Tham, Yih Chung; Koh, Victor; Aung, Tin; Xu, Xinxing; Liu, Yong; Cheng, Ching-Yu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier, 2023 |
Subjects: | Original Articles |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10587633/ https://www.ncbi.nlm.nih.gov/pubmed/37869016 http://dx.doi.org/10.1016/j.xops.2023.100360 |
author | Soh, Zhi Da; Tan, Mingrui; Nongpiur, Monisha Esther; Yu, Marco; Qian, Chaoxu; Tham, Yih Chung; Koh, Victor; Aung, Tin; Xu, Xinxing; Liu, Yong; Cheng, Ching-Yu |
collection | PubMed |
description | OBJECTIVE: To develop and validate a deep learning algorithm that automates the annotation of the scleral spur (SS) and the segmentation of anterior chamber (AC) structures for measurements of AC, iris, and angle width parameters in anterior segment OCT (ASOCT) scans. DESIGN: Cross-sectional study. SUBJECTS: Data from 2 population-based studies (the Singapore Chinese Eye Study and the Singapore Malay Eye Study) and 1 clinical study on angle-closure disease were included in algorithm development. A separate clinical study on angle-closure disease was used for external validation. METHODS: Image contrast of ASOCT scans was first enhanced with CycleGAN. We used a heat map regression approach with a coarse-to-fine framework for SS annotation. An ensemble network of U-Net, full-resolution residual network, and full-resolution U-Net was then used for structure segmentation. Measurements derived from the predicted SS locations and structure segmentations were compared with measurements derived from manual SS annotation and structure segmentation (i.e., ground truth). MAIN OUTCOME MEASURES: Euclidean distance and intraclass correlation coefficients (ICCs) were used to evaluate SS annotation, and the Dice similarity coefficient was used to evaluate structure segmentation. The ICC, Bland–Altman plots, and the repeatability coefficient were used to evaluate the agreement and precision of measurements. RESULTS: For SS annotation, our algorithm achieved a Euclidean distance of 124.7 μm, ICC ≥ 0.95, and a 3.3% error rate. For structure segmentation, we obtained a Dice similarity coefficient ≥ 0.91 for cornea, iris, and AC segmentation. For angle width measurements, ≥ 95% of data points were within the 95% limits of agreement in Bland–Altman plots, with no significant systematic bias (all P > 0.12). The ICC ranged from 0.71 to 0.87 for angle width measurements, was 0.54 for IT750, 0.83 to 0.85 for other iris measurements, and 0.89 to 0.99 for AC measurements.
Using the same SS coordinates from a human expert, measurements obtained from our algorithm were generally less variable than those obtained from a semiautomated angle assessment program. CONCLUSION: We developed a deep learning algorithm that automates SS annotation and structure segmentation in ASOCT scans at a level comparable to human experts, in both open-angle and angle-closure eyes. This algorithm reduces the time needed to obtain ASOCT measurements and the subjectivity involved. FINANCIAL DISCLOSURE(S): The author(s) have no proprietary or commercial interest in any materials discussed in this article. |
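The two per-image metrics named in the abstract above (Euclidean distance between predicted and ground-truth scleral spur coordinates, and the Dice similarity coefficient between predicted and manual segmentation masks) can be sketched as follows. This is a minimal illustration only; the function names, the pixel-to-micrometer scale, and the toy masks are assumptions, not values from the study.

```python
import numpy as np

def euclidean_distance_um(pred_xy, true_xy, um_per_pixel=1.0):
    """Distance between predicted and ground-truth SS coordinates, in micrometers."""
    diff = np.asarray(pred_xy, float) - np.asarray(true_xy, float)
    return float(np.linalg.norm(diff) * um_per_pixel)

def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = np.asarray(pred_mask, bool)
    true = np.asarray(true_mask, bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: two overlapping 4x4 masks (4-pixel vs 6-pixel region)
pred = np.zeros((4, 4), int); pred[1:3, 1:3] = 1
true = np.zeros((4, 4), int); true[1:3, 1:4] = 1
print(dice_coefficient(pred, true))                                 # 2*4/(4+6) = 0.8
print(euclidean_distance_um((10, 10), (13, 14), um_per_pixel=15.0))  # 5 px * 15 = 75.0
```

A Dice value of 1.0 means perfect mask overlap; the study's reported ≥ 0.91 for cornea, iris, and AC would correspond to near-complete overlap with the manual segmentations.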
format | Online Article Text |
id | pubmed-10587633 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-10587633 2023-10-21 Deep Learning-based Quantification of Anterior Segment OCT Parameters Soh, Zhi Da; Tan, Mingrui; Nongpiur, Monisha Esther; Yu, Marco; Qian, Chaoxu; Tham, Yih Chung; Koh, Victor; Aung, Tin; Xu, Xinxing; Liu, Yong; Cheng, Ching-Yu. Ophthalmol Sci, Original Articles. Elsevier, 2023-07-03. /pmc/articles/PMC10587633/ /pubmed/37869016 http://dx.doi.org/10.1016/j.xops.2023.100360 Text en © 2023 Published by Elsevier Inc. on behalf of the American Academy of Ophthalmology. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
title | Deep Learning-based Quantification of Anterior Segment OCT Parameters |
topic | Original Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10587633/ https://www.ncbi.nlm.nih.gov/pubmed/37869016 http://dx.doi.org/10.1016/j.xops.2023.100360 |
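The Bland–Altman agreement analysis reported in this record (checking how many paired algorithm-vs-manual measurements fall within the 95% limits of agreement, and whether the mean difference shows systematic bias) can be sketched as below. The data are synthetic and the helper name is illustrative; the study's actual measurements are not reproduced here.

```python
import numpy as np

def bland_altman_limits(manual, algorithm):
    """Return (bias, lower LoA, upper LoA) for paired measurements.
    95% limits of agreement = mean difference +/- 1.96 * SD of differences."""
    diff = np.asarray(algorithm, float) - np.asarray(manual, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Synthetic paired data: 200 "manual" values with small, unbiased disagreement
rng = np.random.default_rng(0)
manual = rng.normal(30.0, 5.0, 200)        # e.g., angle-width values in degrees
algo = manual + rng.normal(0.0, 1.0, 200)  # algorithm adds random noise, no bias

bias, lo, hi = bland_altman_limits(manual, algo)
diff = algo - manual
within = np.mean((diff >= lo) & (diff <= hi))  # fraction inside the 95% LoA
print(f"bias={bias:.3f}, LoA=({lo:.3f}, {hi:.3f}), within={within:.1%}")
```

For approximately normal differences, about 95% of points fall inside the limits by construction, so a result like the study's "≥ 95% of data points within the 95% limits of agreement" with bias near zero indicates good agreement between the algorithm and manual measurements.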