Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation
BACKGROUND: To assess the ability of the pix2pix generative adversarial network (pix2pix GAN) to synthesize clinically useful optical coherence tomography (OCT) color-coded macular thickness maps based on a modest-sized original fluorescein angiography (FA) dataset and the reverse, to be used as a plausible alternative to either imaging technique in patients with diabetic macular edema (DME).
Main Authors: | Abdelmotaal, Hazem; Sharaf, Mohamed; Soliman, Wael; Wasfi, Ehab; Kedwany, Salma M. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2022 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9434904/ https://www.ncbi.nlm.nih.gov/pubmed/36050661 http://dx.doi.org/10.1186/s12886-022-02577-7 |
_version_ | 1784780988113485824 |
author | Abdelmotaal, Hazem Sharaf, Mohamed Soliman, Wael Wasfi, Ehab Kedwany, Salma M. |
author_facet | Abdelmotaal, Hazem Sharaf, Mohamed Soliman, Wael Wasfi, Ehab Kedwany, Salma M. |
author_sort | Abdelmotaal, Hazem |
collection | PubMed |
description | BACKGROUND: To assess the ability of the pix2pix generative adversarial network (pix2pix GAN) to synthesize clinically useful optical coherence tomography (OCT) color-coded macular thickness maps based on a modest-sized original fluorescein angiography (FA) dataset and the reverse, to be used as a plausible alternative to either imaging technique in patients with diabetic macular edema (DME). METHODS: Original images of 1,195 eyes of 708 nonconsecutive diabetic patients with or without DME were retrospectively analyzed. OCT macular thickness maps and corresponding FA images were preprocessed for use in training and testing the proposed pix2pix GAN. The best quality synthesized images using the test set were selected based on the Fréchet inception distance score, and their quality was studied subjectively by image readers and objectively by calculating the peak signal-to-noise ratio, structural similarity index, and Hamming distance. We also used original and synthesized images in a trained deep convolutional neural network (DCNN) to plot the difference between synthesized images and their ground-truth analogues and calculate the learned perceptual image patch similarity metric. RESULTS: The pix2pix GAN-synthesized images showed plausible subjectively and objectively assessed quality, which can provide a clinically useful alternative to either image modality. CONCLUSION: Using the pix2pix GAN to synthesize mutually dependent OCT color-coded macular thickness maps or FA images can overcome issues related to machine unavailability or clinical situations that preclude the performance of either imaging technique. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT05105620, November 2021. “Retrospectively registered”. |
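The abstract above evaluates synthesized images objectively with peak signal-to-noise ratio (PSNR) and Hamming distance against ground-truth analogues. As a minimal sketch of how two of these metrics can be computed (this is an illustration using NumPy, not the authors' actual evaluation code; the image sizes and threshold below are assumptions):

```python
import numpy as np

def psnr(original, synthesized, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images (higher is better)."""
    mse = np.mean((original.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def hamming_distance(original, synthesized, threshold=128):
    """Fraction of pixels whose binarized values differ (lower is better)."""
    a = original >= threshold
    b = synthesized >= threshold
    return float(np.mean(a != b))

# Toy example: a "ground-truth" map and a slightly noisy "synthesized" copy.
rng = np.random.default_rng(0)
truth = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noise = rng.integers(-5, 6, size=truth.shape)
synth = np.clip(truth.astype(int) + noise, 0, 255).astype(np.uint8)

print(f"PSNR: {psnr(truth, synth):.1f} dB")
print(f"Hamming distance: {hamming_distance(truth, synth):.4f}")
```

The structural similarity index (SSIM) and learned perceptual image patch similarity (LPIPS) mentioned in the abstract require windowed statistics and a pretrained network, respectively, and are typically computed with dedicated libraries rather than re-implemented.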
format | Online Article Text |
id | pubmed-9434904 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-94349042022-09-02 Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation Abdelmotaal, Hazem Sharaf, Mohamed Soliman, Wael Wasfi, Ehab Kedwany, Salma M. BMC Ophthalmol Research BACKGROUND: To assess the ability of the pix2pix generative adversarial network (pix2pix GAN) to synthesize clinically useful optical coherence tomography (OCT) color-coded macular thickness maps based on a modest-sized original fluorescein angiography (FA) dataset and the reverse, to be used as a plausible alternative to either imaging technique in patients with diabetic macular edema (DME). METHODS: Original images of 1,195 eyes of 708 nonconsecutive diabetic patients with or without DME were retrospectively analyzed. OCT macular thickness maps and corresponding FA images were preprocessed for use in training and testing the proposed pix2pix GAN. The best quality synthesized images using the test set were selected based on the Fréchet inception distance score, and their quality was studied subjectively by image readers and objectively by calculating the peak signal-to-noise ratio, structural similarity index, and Hamming distance. We also used original and synthesized images in a trained deep convolutional neural network (DCNN) to plot the difference between synthesized images and their ground-truth analogues and calculate the learned perceptual image patch similarity metric. RESULTS: The pix2pix GAN-synthesized images showed plausible subjectively and objectively assessed quality, which can provide a clinically useful alternative to either image modality. CONCLUSION: Using the pix2pix GAN to synthesize mutually dependent OCT color-coded macular thickness maps or FA images can overcome issues related to machine unavailability or clinical situations that preclude the performance of either imaging technique. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT05105620, November 2021. 
“Retrospectively registered”. BioMed Central 2022-09-01 /pmc/articles/PMC9434904/ /pubmed/36050661 http://dx.doi.org/10.1186/s12886-022-02577-7 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/Open AccessThis article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/ (https://creativecommons.org/publicdomain/zero/1.0/) ) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Abdelmotaal, Hazem Sharaf, Mohamed Soliman, Wael Wasfi, Ehab Kedwany, Salma M. Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
title | Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
title_full | Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
title_fullStr | Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
title_full_unstemmed | Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
title_short | Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
title_sort | bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9434904/ https://www.ncbi.nlm.nih.gov/pubmed/36050661 http://dx.doi.org/10.1186/s12886-022-02577-7 |
work_keys_str_mv | AT abdelmotaalhazem bridgingtheresourcesgapdeeplearningforfluoresceinangiographyandopticalcoherencetomographymacularthicknessmapimagetranslation AT sharafmohamed bridgingtheresourcesgapdeeplearningforfluoresceinangiographyandopticalcoherencetomographymacularthicknessmapimagetranslation AT solimanwael bridgingtheresourcesgapdeeplearningforfluoresceinangiographyandopticalcoherencetomographymacularthicknessmapimagetranslation AT wasfiehab bridgingtheresourcesgapdeeplearningforfluoresceinangiographyandopticalcoherencetomographymacularthicknessmapimagetranslation AT kedwanysalmam bridgingtheresourcesgapdeeplearningforfluoresceinangiographyandopticalcoherencetomographymacularthicknessmapimagetranslation |