
Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images


Bibliographic Details
Main Authors: Kobat, Sabiha Gungor, Baygin, Nursena, Yusufoglu, Elif, Baygin, Mehmet, Barua, Prabal Datta, Dogan, Sengul, Yaman, Orhan, Celiker, Ulku, Yildirim, Hakan, Tan, Ru-San, Tuncer, Turker, Islam, Nazrul, Acharya, U. Rajendra
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9406859/
https://www.ncbi.nlm.nih.gov/pubmed/36010325
http://dx.doi.org/10.3390/diagnostics12081975
_version_ 1784774224260366336
author Kobat, Sabiha Gungor
Baygin, Nursena
Yusufoglu, Elif
Baygin, Mehmet
Barua, Prabal Datta
Dogan, Sengul
Yaman, Orhan
Celiker, Ulku
Yildirim, Hakan
Tan, Ru-San
Tuncer, Turker
Islam, Nazrul
Acharya, U. Rajendra
author_facet Kobat, Sabiha Gungor
Baygin, Nursena
Yusufoglu, Elif
Baygin, Mehmet
Barua, Prabal Datta
Dogan, Sengul
Yaman, Orhan
Celiker, Ulku
Yildirim, Hakan
Tan, Ru-San
Tuncer, Turker
Islam, Nazrul
Acharya, U. Rajendra
author_sort Kobat, Sabiha Gungor
collection PubMed
description Diabetic retinopathy (DR) is a common complication of diabetes that can lead to progressive vision loss. Regular surveillance with fundal photography, early diagnosis, and prompt intervention are paramount to reducing the incidence of DR-induced vision loss. However, manual interpretation of fundal photographs is subject to human error. In this study, a new method based on horizontal and vertical patch division was proposed for the automated classification of DR images on fundal photographs. The novel contributions of this study are as follows: we proposed a new non-fixed-size patch division model to achieve high classification performance and collected a new fundus image dataset. Two datasets were used to test the model: the newly collected three-class (normal, non-proliferative DR, and proliferative DR) dataset comprising 2355 DR images and the established open-access five-class Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset comprising 3662 images. Two analysis scenarios, Case 1 and Case 2, with three classes (normal, non-proliferative DR, and proliferative DR) and five classes (normal, mild DR, moderate DR, severe DR, and proliferative DR), respectively, were derived from the APTOS 2019 dataset. These datasets and cases were used to demonstrate the general classification performance of the proposed model. By applying transfer learning, the last fully connected and global average pooling layers of the DenseNet201 architecture were used to extract deep features from each input DR image and from each of its eight horizontal and vertical patches. The most discriminative features were then selected using neighborhood component analysis and fed to a standard shallow cubic support vector machine for classification. Together, these steps constitute a new patch-based deep-feature engineering model; it is a cognitive model in the sense that it uses efficient methods in each phase. On our new DR dataset, the model attained 94.06% and 91.55% accuracy for three-class classification with 80:20 hold-out validation and 10-fold cross-validation, respectively. Similarly excellent results were obtained for three-class classification on the Case 1 dataset. In addition, the model attained 87.43% and 84.90% five-class classification accuracy using 80:20 hold-out validation and 10-fold cross-validation, respectively, on the Case 2 dataset, outperforming prior DR classification studies based on the five-class APTOS 2019 dataset by more than about 2%. These findings demonstrate the accuracy and robustness of the proposed model for the classification of DR images.
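
The description above outlines a concrete pipeline: split each fundus image into horizontal and vertical patches, extract deep features from the full image and each patch with a pre-trained DenseNet201 (its global average pooling and last fully connected layers), select discriminative features with neighborhood component analysis, and classify with a cubic support vector machine. The Python sketch below illustrates one plausible reading of that pipeline; it is not the authors' implementation, and the four-horizontal/four-vertical strip layout, the 224x224 input size, the 64 NCA components, and the polynomial-kernel SVC are illustrative assumptions.

```python
# Minimal sketch of the pipeline summarized above; NOT the authors' code.
# Assumptions: the eight patches are four horizontal and four vertical strips,
# scikit-learn's NeighborhoodComponentsAnalysis stands in for NCA-based feature
# selection, and SVC(kernel="poly", degree=3) stands in for the "cubic" SVM.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

device = "cuda" if torch.cuda.is_available() else "cpu"
densenet = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
densenet.eval().to(device)

preprocess = transforms.Compose([
    transforms.ToTensor(),                        # HxWx3 uint8 -> 3xHxW float in [0, 1]
    transforms.Resize((224, 224), antialias=True),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def split_patches(img):
    """Return the full image plus four horizontal and four vertical strips (9 crops)."""
    h, w = img.shape[:2]
    crops = [img]
    crops += [img[i * h // 4:(i + 1) * h // 4, :, :] for i in range(4)]  # horizontal strips
    crops += [img[:, j * w // 4:(j + 1) * w // 4, :] for j in range(4)]  # vertical strips
    return crops

@torch.no_grad()
def deep_features(img_rgb):
    """Concatenate GAP (1920-d) and last-FC (1000-d) DenseNet201 features per crop."""
    feats = []
    for crop in split_patches(img_rgb):
        x = preprocess(crop).unsqueeze(0).to(device)
        fmap = densenet.features(x)                              # convolutional feature maps
        gap = F.adaptive_avg_pool2d(F.relu(fmap), 1).flatten(1)  # global average pooling
        fc = densenet.classifier(gap)                            # last fully connected layer
        feats.append(torch.cat([gap, fc], dim=1).cpu().numpy().ravel())
    return np.concatenate(feats)                                 # 9 x 2920 = 26,280 features

# X: one deep_features() vector per fundus image; y: class labels
# (e.g., 0 = normal, 1 = non-proliferative DR, 2 = proliferative DR).
clf = make_pipeline(
    StandardScaler(),
    NeighborhoodComponentsAnalysis(n_components=64, random_state=0),  # slow on ~26k features
    SVC(kernel="poly", degree=3),
)
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```

Note that using scikit-learn's NeighborhoodComponentsAnalysis as a dimensionality reducer differs from the abstract's description of NCA as a feature selector; a per-feature NCA weight ranking (as in MATLAB's fscnca) would match the stated method more closely.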
format Online
Article
Text
id pubmed-9406859
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-94068592022-08-26 Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images Kobat, Sabiha Gungor Baygin, Nursena Yusufoglu, Elif Baygin, Mehmet Barua, Prabal Datta Dogan, Sengul Yaman, Orhan Celiker, Ulku Yildirim, Hakan Tan, Ru-San Tuncer, Turker Islam, Nazrul Acharya, U. Rajendra Diagnostics (Basel) Article Diabetic retinopathy (DR) is a common complication of diabetes that can lead to progressive vision loss. Regular surveillance with fundal photography, early diagnosis, and prompt intervention are paramount to reducing the incidence of DR-induced vision loss. However, manual interpretation of fundal photographs is subject to human error. In this study, a new method based on horizontal and vertical patch division was proposed for the automated classification of DR images on fundal photographs. The novel contributions of this study are as follows: we proposed a new non-fixed-size patch division model to achieve high classification performance and collected a new fundus image dataset. Two datasets were used to test the model: the newly collected three-class (normal, non-proliferative DR, and proliferative DR) dataset comprising 2355 DR images and the established open-access five-class Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset comprising 3662 images. Two analysis scenarios, Case 1 and Case 2, with three classes (normal, non-proliferative DR, and proliferative DR) and five classes (normal, mild DR, moderate DR, severe DR, and proliferative DR), respectively, were derived from the APTOS 2019 dataset. These datasets and cases were used to demonstrate the general classification performance of the proposed model. By applying transfer learning, the last fully connected and global average pooling layers of the DenseNet201 architecture were used to extract deep features from each input DR image and from each of its eight horizontal and vertical patches. The most discriminative features were then selected using neighborhood component analysis and fed to a standard shallow cubic support vector machine for classification. Together, these steps constitute a new patch-based deep-feature engineering model; it is a cognitive model in the sense that it uses efficient methods in each phase. On our new DR dataset, the model attained 94.06% and 91.55% accuracy for three-class classification with 80:20 hold-out validation and 10-fold cross-validation, respectively. Similarly excellent results were obtained for three-class classification on the Case 1 dataset. In addition, the model attained 87.43% and 84.90% five-class classification accuracy using 80:20 hold-out validation and 10-fold cross-validation, respectively, on the Case 2 dataset, outperforming prior DR classification studies based on the five-class APTOS 2019 dataset by more than about 2%. These findings demonstrate the accuracy and robustness of the proposed model for the classification of DR images. MDPI 2022-08-15 /pmc/articles/PMC9406859/ /pubmed/36010325 http://dx.doi.org/10.3390/diagnostics12081975 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Kobat, Sabiha Gungor
Baygin, Nursena
Yusufoglu, Elif
Baygin, Mehmet
Barua, Prabal Datta
Dogan, Sengul
Yaman, Orhan
Celiker, Ulku
Yildirim, Hakan
Tan, Ru-San
Tuncer, Turker
Islam, Nazrul
Acharya, U. Rajendra
Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
title Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
title_full Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
title_fullStr Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
title_full_unstemmed Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
title_short Automated Diabetic Retinopathy Detection Using Horizontal and Vertical Patch Division-Based Pre-Trained DenseNET with Digital Fundus Images
title_sort automated diabetic retinopathy detection using horizontal and vertical patch division-based pre-trained densenet with digital fundus images
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9406859/
https://www.ncbi.nlm.nih.gov/pubmed/36010325
http://dx.doi.org/10.3390/diagnostics12081975
work_keys_str_mv AT kobatsabihagungor automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT bayginnursena automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT yusufogluelif automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT bayginmehmet automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT baruaprabaldatta automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT dogansengul automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT yamanorhan automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT celikerulku automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT yildirimhakan automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT tanrusan automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT tuncerturker automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT islamnazrul automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages
AT acharyaurajendra automateddiabeticretinopathydetectionusinghorizontalandverticalpatchdivisionbasedpretraineddensenetwithdigitalfundusimages