
A deep learning-based framework for retinal fundus image enhancement


Bibliographic Details
Main Authors: Lee, Kang Geon, Song, Su Jeong, Lee, Soochahn, Yu, Hyeong Gon, Kim, Dong Ik, Lee, Kyoung Mu
Format: Online Article, Text
Language: English
Published: Public Library of Science, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10019688/
https://www.ncbi.nlm.nih.gov/pubmed/36928209
http://dx.doi.org/10.1371/journal.pone.0282416
Description
Summary: PROBLEM: Low-quality fundus images with complex degradation can cause costly re-examinations of patients or inaccurate clinical diagnoses. AIM: This study aims to create an automatic fundus macular image enhancement framework that improves low-quality fundus images and removes complex image degradation. METHOD: We propose a new deep learning-based model that automatically enhances low-quality retinal fundus images suffering from complex degradation. We collected a dataset comprising 1068 pairs of high-quality (HQ) and low-quality (LQ) fundus images from the Kangbuk Samsung Hospital's health screening program and ophthalmology department from 2017 to 2019. We then used this dataset to develop data augmentation methods that simulate major aspects of retinal image degradation and to propose a customized convolutional neural network (CNN) architecture that enhances LQ images depending on the nature of the degradation. Peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), r-value (linear index of fuzziness), and the proportion of ungradable fundus photographs before and after the enhancement process were calculated to assess the performance of the proposed model. A comparative evaluation was conducted on an external database and four different open-source databases. RESULTS: The evaluation on the external test dataset showed a significant increase in PSNR and SSIM compared with the original LQ images. Moreover, PSNR and SSIM increased by over 4 dB and 0.04, respectively, compared with the previous state-of-the-art methods (P < 0.05). The proportion of ungradable fundus photographs decreased from 42.6% to 26.4% (P = 0.012). CONCLUSION: Our enhancement process significantly improves LQ fundus images that suffer from complex degradation. Moreover, our customized CNN achieves improved performance over existing state-of-the-art methods. Overall, our framework can have a clinical impact by reducing re-examinations and improving diagnostic accuracy.
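
As a rough illustration of the image-quality metrics reported above (not the authors' evaluation code), the following Python sketch computes PSNR and SSIM for an enhanced fundus image against its high-quality reference using scikit-image; the image file names are hypothetical placeholders.

```python
# Minimal sketch: compute PSNR and SSIM between an enhanced image and its
# high-quality reference. Assumes scikit-image >= 0.19 (for channel_axis)
# and RGB images of identical size; file names below are placeholders.
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hq = img_as_float(io.imread("hq_fundus.png"))        # high-quality reference
enhanced = img_as_float(io.imread("enhanced.png"))   # output of an enhancement model

# PSNR in dB; higher is better. data_range is 1.0 for float images in [0, 1].
psnr = peak_signal_noise_ratio(hq, enhanced, data_range=1.0)

# SSIM in [0, 1]; higher is better. channel_axis=-1 averages over RGB channels.
ssim = structural_similarity(hq, enhanced, channel_axis=-1, data_range=1.0)

print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```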