An Artificial-Intelligence–Based Automated Grading and Lesions Segmentation System for Myopic Maculopathy Based on Color Fundus Photographs

Bibliographic Details
Main Authors: Tang, Jia, Yuan, Mingzhen, Tian, Kaibin, Wang, Yuelin, Wang, Dongyue, Yang, Jingyuan, Yang, Zhikun, He, Xixi, Luo, Yan, Li, Ying, Xu, Jie, Li, Xirong, Ding, Dayong, Ren, Yanhan, Chen, Youxin, Sadda, Srinivas R., Yu, Weihong
Format: Online Article Text
Language: English
Published: The Association for Research in Vision and Ophthalmology 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9206390/
https://www.ncbi.nlm.nih.gov/pubmed/35704327
http://dx.doi.org/10.1167/tvst.11.6.16
Description
Summary: PURPOSE: To develop deep learning models based on color fundus photographs that can automatically grade myopic maculopathy, diagnose pathologic myopia, and identify and segment myopia-related lesions.

METHODS: Photographs were graded and annotated by four ophthalmologists and were then divided into a high-consistency subgroup or a low-consistency subgroup according to the consistency between the graders' results. A ResNet-50 network was used to develop the classification model, and a DeepLabv3+ network was used to develop the segmentation model for lesion identification. The two models were then combined to develop the classification-and-segmentation–based co-decision model.

RESULTS: This study included 1395 color fundus photographs from 895 patients. The grading accuracy of the co-decision model was 0.9370, and the quadratic-weighted κ coefficient was 0.9651; the co-decision model achieved an area under the receiver operating characteristic curve of 0.9980 in diagnosing pathologic myopia. The photograph-level F1 values of the segmentation model for identifying the optic disc, peripapillary atrophy, diffuse atrophy, patchy atrophy, and macular atrophy were all >0.95; the pixel-level F1 values for segmenting the optic disc and peripapillary atrophy were both >0.9; the pixel-level F1 values for segmenting diffuse atrophy, patchy atrophy, and macular atrophy were all >0.8; and the photograph-level recall/sensitivity for detecting lacquer cracks was 0.9230.

CONCLUSIONS: The models could accurately and automatically grade myopic maculopathy, diagnose pathologic myopia, and identify and monitor progression of the lesions.

TRANSLATIONAL RELEVANCE: The models can potentially help with the diagnosis, screening, and follow-up of pathologic myopia in clinical practice.
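The abstract reports grading agreement as a quadratic-weighted κ coefficient, the standard metric for ordinal grading tasks such as maculopathy severity. As a minimal illustration of how that metric is computed (not code from the study; the function name and example grade labels are hypothetical), it can be sketched in pure Python:

```python
from collections import Counter

def quadratic_weighted_kappa(y_true, y_pred, num_classes):
    """Quadratic-weighted Cohen's kappa for ordinal grades 0..num_classes-1.

    kappa = 1 - (sum of w_ij * O_ij) / (sum of w_ij * E_ij), where
    w_ij = (i - j)^2 / (num_classes - 1)^2, O is the observed confusion
    matrix, and E is the expected matrix under rater independence.
    """
    n = len(y_true)
    # Observed confusion counts: rows = true grade, columns = predicted grade.
    obs = [[0.0] * num_classes for _ in range(num_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1.0
    # Marginal histograms for the expected (chance-agreement) matrix.
    hist_true = Counter(y_true)
    hist_pred = Counter(y_pred)
    numerator = 0.0
    denominator = 0.0
    max_dist = (num_classes - 1) ** 2
    for i in range(num_classes):
        for j in range(num_classes):
            w = (i - j) ** 2 / max_dist           # quadratic disagreement weight
            expected = hist_true[i] * hist_pred[j] / n
            numerator += w * obs[i][j]
            denominator += w * expected
    return 1.0 - numerator / denominator

# Perfect agreement between grader and model yields kappa = 1.0.
print(quadratic_weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # → 1.0
```

Because the weights grow with the squared distance between grades, confusing adjacent maculopathy grades is penalized far less than confusing distant ones, which is why this metric suits ordinal grading schemes.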