
Leveraging Multimodal Deep Learning Architecture with Retina Lesion Information to Detect Diabetic Retinopathy


Bibliographic Details
Main Authors: Tseng, Vincent S., Chen, Ching-Long, Liang, Chang-Min, Tai, Ming-Cheng, Liu, Jung-Tzu, Wu, Po-Yi, Deng, Ming-Shan, Lee, Ya-Wen, Huang, Teng-Yi, Chen, Yi-Hao
Format: Online Article Text
Language: English
Published: The Association for Research in Vision and Ophthalmology 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7424907/
https://www.ncbi.nlm.nih.gov/pubmed/32855845
http://dx.doi.org/10.1167/tvst.9.2.41
Description
Summary: PURPOSE: To improve disease severity classification from fundus images using a hybrid architecture with symptom awareness for diabetic retinopathy (DR).

METHODS: We used 26,699 fundus images of 17,834 diabetic patients from three Taiwanese hospitals, collected from 2007 to 2018, for DR severity classification. Thirty-seven ophthalmologists verified the images, providing lesion annotations and severity classifications as the ground truth. Two deep learning fusion architectures were proposed: late fusion, which combines lesion and severity classification models in parallel using a postprocessing procedure, and two-stage early fusion, which combines lesion detection and classification models sequentially and mimics the decision-making process of ophthalmologists. Messidor-2, with 1748 images, was used to evaluate and benchmark the performance of the architecture. The primary evaluation metrics were classification accuracy, weighted κ statistic, and area under the receiver operating characteristic curve (AUC).

RESULTS: On the hospital data, the hybrid architecture achieved a good detection rate, with accuracy and weighted κ of 84.29% and 84.01%, respectively, for five-class DR grading. It also classified images of early-stage DR more accurately than conventional algorithms. On Messidor-2, the model achieved an AUC of 97.09% in referral DR detection, compared with AUCs of 85% to 99% for state-of-the-art algorithms trained on larger databases.

CONCLUSIONS: Our hybrid architectures strengthened and extracted characteristics from DR images while improving the performance of DR grading, thereby increasing the robustness and confidence of the architectures for general use.

TRANSLATIONAL RELEVANCE: The proposed fusion architectures can enable faster and more accurate diagnosis of various DR pathologies than current manual clinical practice.
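The two fusion strategies named in the abstract can be contrasted with a minimal sketch. The PyTorch code below is an illustration under assumptions, not the authors' implementation: the backbone, feature dimensions, number of lesion categories, and fusion heads are placeholders chosen only to show how a parallel (late) fusion differs from a sequential (two-stage early) fusion that feeds lesion evidence into the grader.

```python
# Sketch of late fusion vs. two-stage early fusion for DR grading.
# All layer sizes and the 4 lesion types are illustrative assumptions.
import torch
import torch.nn as nn

NUM_LESION_TYPES = 4   # assumption: e.g. microaneurysms, hemorrhages, exudates, neovascularization
NUM_DR_GRADES = 5      # five-class DR grading, as reported in the abstract

class Backbone(nn.Module):
    """Stand-in CNN feature extractor (placeholder for any ImageNet-style backbone)."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, out_dim), nn.ReLU(),
        )
    def forward(self, x):
        return self.features(x)

class LateFusion(nn.Module):
    """Late fusion: lesion and severity models run in parallel; their outputs
    are combined by a post-processing head (here a small linear layer)."""
    def __init__(self):
        super().__init__()
        self.lesion_model = nn.Sequential(Backbone(), nn.Linear(128, NUM_LESION_TYPES))
        self.severity_model = nn.Sequential(Backbone(), nn.Linear(128, NUM_DR_GRADES))
        self.post = nn.Linear(NUM_LESION_TYPES + NUM_DR_GRADES, NUM_DR_GRADES)
    def forward(self, image):
        lesion_logits = self.lesion_model(image)
        severity_logits = self.severity_model(image)
        return self.post(torch.cat([torch.sigmoid(lesion_logits), severity_logits], dim=1))

class TwoStageEarlyFusion(nn.Module):
    """Two-stage early fusion: a lesion detector runs first, and its output is
    fed together with image features into the severity classifier, mimicking
    how an ophthalmologist grades from the lesions observed."""
    def __init__(self):
        super().__init__()
        self.lesion_detector = nn.Sequential(Backbone(), nn.Linear(128, NUM_LESION_TYPES))
        self.image_encoder = Backbone()
        self.grader = nn.Linear(128 + NUM_LESION_TYPES, NUM_DR_GRADES)
    def forward(self, image):
        lesions = torch.sigmoid(self.lesion_detector(image))  # stage 1: which lesions are present
        feats = self.image_encoder(image)                     # stage 2: grade from features + lesions
        return self.grader(torch.cat([feats, lesions], dim=1))

if __name__ == "__main__":
    x = torch.randn(2, 3, 224, 224)        # two dummy fundus images
    print(LateFusion()(x).shape)           # -> torch.Size([2, 5])
    print(TwoStageEarlyFusion()(x).shape)  # -> torch.Size([2, 5])
```

The reported metrics (accuracy, weighted κ, and AUC for referral DR detection) can likewise be computed with scikit-learn. The labels and probabilities below are toy values; quadratic κ weighting and a "grade ≥ moderate NPDR" referral threshold are common conventions assumed here, not details stated in the abstract.

```python
# Toy illustration of the evaluation metrics named in the abstract.
from sklearn.metrics import accuracy_score, cohen_kappa_score, roc_auc_score

y_true = [0, 1, 2, 3, 4, 2, 1, 0]   # ground-truth DR grades (toy data)
y_pred = [0, 1, 2, 4, 4, 2, 0, 0]   # model predictions (toy data)
print("accuracy:", accuracy_score(y_true, y_pred))
print("weighted kappa:", cohen_kappa_score(y_true, y_pred, weights="quadratic"))

# Referral-DR AUC: binarize grades at >= 2 (assumed threshold) and score
# against a per-image probability of referable disease (toy values).
referable_true = [int(g >= 2) for g in y_true]
referable_prob = [0.1, 0.2, 0.9, 0.8, 0.95, 0.7, 0.3, 0.05]
print("referral-DR AUC:", roc_auc_score(referable_true, referable_prob))
```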