A GAN-based deep enhancer for quality enhancement of retinal images photographed by a handheld fundus camera
| Main Authors: | |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Elsevier, 2022 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10577846/ https://www.ncbi.nlm.nih.gov/pubmed/37846289 http://dx.doi.org/10.1016/j.aopr.2022.100077 |

Summary: OBJECTIVE: Due to limited imaging conditions, the quality of fundus images is often unsatisfactory, especially for images photographed with handheld fundus cameras. Here, we developed an automated image-enhancement method that combines two mirror-symmetric generative adversarial networks (GANs). METHODS: A total of 1047 retinal images were included. The raw images were enhanced by a GAN-based deep enhancer and by another method based on luminosity and contrast adjustment. All raw and enhanced images were anonymously assessed and classified into six quality levels by three experienced ophthalmologists. The quality classification and quality change of the images were compared. In addition, detailed image-reading results for the number of suspected pathological fundi were compared. RESULTS: After GAN enhancement, image quality improved for 42.9% of images, remained stable for 37.5%, and decreased for 19.6%. After excluding images already at the highest quality level (level 0) before enhancement, most images (75.6%) improved in quality classification and only a minority (9.3%) declined. The GAN-based method was superior to the luminosity-and-contrast-adjustment method for quality improvement (P < 0.001). In the image-reading results, the consistency rate ranged from 86.6% to 95.6%, and for specific disease subtypes the discrepancy count and discrepancy rate were below 15 and 15%, respectively, for the two ophthalmologists. CONCLUSIONS: Learning the style of high-quality retinal images with the proposed deep enhancer may be an effective way to improve the quality of retinal images photographed by handheld fundus cameras.
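
The abstract describes an enhancer built from two mirror-symmetric GANs, i.e. a cycle-consistent translation between a low-quality and a high-quality image domain. The sketch below shows what a single generator update in such a setup could look like. It is a minimal illustration only, assuming a CycleGAN-style objective; all module definitions, names, shapes, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of one generator update for a cycle-consistent ("mirror-symmetric")
# GAN, assuming a CycleGAN-style objective. Toy networks and random tensors stand in
# for the real enhancer and fundus images; nothing here is taken from the paper.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder; a real enhancer would be far deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style discriminator producing a grid of real/fake scores."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

# G maps low-quality -> high-quality style; F is its mirror, high -> low.
G, F = TinyGenerator(), TinyGenerator()
D_high, D_low = TinyDiscriminator(), TinyDiscriminator()

adv_loss = nn.MSELoss()    # least-squares adversarial loss
cyc_loss = nn.L1Loss()     # cycle-consistency loss
opt_G = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

# Stand-in batches scaled to [-1, 1] to match the Tanh generator output.
low = torch.rand(2, 3, 64, 64) * 2 - 1    # e.g. handheld-camera images
high = torch.rand(2, 3, 64, 64) * 2 - 1   # e.g. high-quality reference images

fake_high = G(low)
fake_low = F(high)

# Adversarial terms: each translated image should fool its discriminator.
pred_high = D_high(fake_high)
pred_low = D_low(fake_low)
loss_adv = adv_loss(pred_high, torch.ones_like(pred_high)) \
         + adv_loss(pred_low, torch.ones_like(pred_low))

# Mirror-symmetric cycle terms: translating back should recover the input.
loss_cyc = cyc_loss(F(fake_high), low) + cyc_loss(G(fake_low), high)

loss = loss_adv + 10.0 * loss_cyc
opt_G.zero_grad()
loss.backward()
opt_G.step()
print(f"generator loss: {loss.item():.3f}")
```

In a setup of this kind, the cycle-consistency term is what allows the enhancer to learn the "style" of high-quality images from unpaired data, which matches the abstract's framing of the enhancer as learning the style of high-quality retinal images; the discriminator updates, omitted here for brevity, would alternate with this generator step.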