Semi‐supervised classification of fundus images combined with CNN and GCN

Bibliographic Details
Main Authors: Duan, Sixu; Huang, Pu; Chen, Min; Wang, Ting; Sun, Xiaolei; Chen, Meirong; Dong, Xueyuan; Jiang, Zekun; Li, Dengwang
Format: Online Article (Text)
Language: English
Published: John Wiley and Sons Inc., 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9797168/
https://www.ncbi.nlm.nih.gov/pubmed/35946866
http://dx.doi.org/10.1002/acm2.13746
Description
Summary:

PURPOSE: Diabetic retinopathy (DR) is one of the most serious complications of diabetes, a fundus lesion characterized by specific pathological changes. Early diagnosis of DR can effectively reduce the visual damage it causes. Because DR lesions vary widely in type and morphology, automatic classification of fundus images in mass screening can save clinicians considerable diagnosis time. To address these problems, this paper proposes a novel framework: the graph attentional convolutional neural network (GACNN).

METHODS AND MATERIALS: The network combines a convolutional neural network (CNN) with a graph convolutional network (GCN). The CNN and GCN extract the global and spatial features of fundus images, respectively, and an attention mechanism is introduced to improve the GCN's adaptability to the graph topology. A semi-supervised training method is adopted for classification, which greatly improves the generalization ability of the network.

RESULTS: To verify the effectiveness of the network, comparative and ablation experiments were conducted, using the confusion matrix, precision, recall, kappa score, and accuracy as evaluation metrics. Classification accuracy increases with the labeling rate; in particular, at a labeling rate of 100%, GACNN reaches a classification accuracy of 93.35%, an improvement of 6.24% over DenseNet121.

CONCLUSIONS: Semi-supervised classification based on an attention mechanism can effectively improve the classification performance of the model, achieving favorable results on metrics such as accuracy and recall. GACNN provides a feasible classification scheme for fundus images and effectively reduces the human resources required for screening.
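
To make the hybrid design concrete, below is a minimal, illustrative PyTorch sketch of the kind of architecture the abstract describes: a CNN backbone extracts per-image features, a single GAT-style attention layer propagates them over a k-nearest-neighbor similarity graph, and the semi-supervised loss is computed only on the labeled samples. The layer sizes, the cosine-similarity k-NN graph construction, and all names here are assumptions for illustration, not the authors' published implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SimpleCNN(nn.Module):
        """Tiny CNN backbone mapping each fundus image to a feature vector."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, feat_dim)

        def forward(self, x):                   # x: (N, 3, H, W)
            return self.fc(self.conv(x).flatten(1))

    class GraphAttentionLayer(nn.Module):
        """Single-head GAT-style layer: attention weights over graph neighbors."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)
            self.a = nn.Linear(2 * out_dim, 1, bias=False)

        def forward(self, h, adj):              # h: (N, in_dim), adj: (N, N) 0/1
            z = self.W(h)
            n = z.size(0)
            # Pairwise attention logits e_ij = a([z_i || z_j])
            zi = z.unsqueeze(1).expand(n, n, -1)
            zj = z.unsqueeze(0).expand(n, n, -1)
            e = F.leaky_relu(self.a(torch.cat([zi, zj], dim=-1)).squeeze(-1))
            e = e.masked_fill(adj == 0, float('-inf'))   # attend to neighbors only
            alpha = torch.softmax(e, dim=1)
            return alpha @ z                    # attention-weighted aggregation

    class GACNNSketch(nn.Module):
        def __init__(self, num_classes=5, feat_dim=64, k=8):
            super().__init__()
            self.cnn = SimpleCNN(feat_dim)
            self.gat = GraphAttentionLayer(feat_dim, feat_dim)
            self.head = nn.Linear(feat_dim, num_classes)
            self.k = k

        def knn_graph(self, feats):
            # Symmetric k-NN graph from cosine similarity (assumed construction).
            f = F.normalize(feats, dim=1)
            sim = f @ f.t()
            idx = sim.topk(self.k + 1, dim=1).indices   # includes self-loop
            adj = torch.zeros_like(sim)
            adj.scatter_(1, idx, 1.0)
            return ((adj + adj.t()) > 0).float()

        def forward(self, images):
            feats = self.cnn(images)
            adj = self.knn_graph(feats.detach())
            return self.head(F.relu(self.gat(feats, adj)))

    # Semi-supervised step: the loss uses only the labeled subset of the batch.
    model = GACNNSketch()
    images = torch.randn(16, 3, 64, 64)         # toy batch of fundus images
    labels = torch.randint(0, 5, (16,))
    labeled = torch.zeros(16, dtype=torch.bool)
    labeled[:4] = True                          # e.g., a 25% labeling rate
    logits = model(images)
    loss = F.cross_entropy(logits[labeled], labels[labeled])
    loss.backward()

Masking the cross-entropy loss to the labeled subset is what makes the scheme semi-supervised in spirit: unlabeled images still sit in the similarity graph and therefore still shape the attention-weighted feature propagation that produces every prediction.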