
Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images

We herein propose a PraNet-based deep-learning model for estimating the size of the non-perfusion area (NPA) in pseudo-color fundus photographs from an ultra-wide-field (UWF) image. We trained the model with focal loss and weighted binary cross-entropy loss to handle the class-imbalanced dataset, and optimized hyperparameters to minimize validation loss. As expected, the resulting PraNet-based deep-learning model outperformed previously published methods. For verification, we used UWF fundus images with NPA and compared the estimated NPA (eNPA) against the ground truth from fluorescein angiography (FA) using Bland–Altman plots, which showed that the bias between the eNPA and the ground truth was smaller than 10% of the confidence-limit zone and that fewer than 10% of the paired images were outliers. The accuracy of the model was also tested on an external dataset from another institution, which confirmed the generalizability of the model. For validation, we used a contingency table for ROC analysis to assess the sensitivity and specificity of the eNPA: sensitivity ranged from 83.3% to 87.0% and specificity from 79.3% to 85.7%. In conclusion, we developed an AI model capable of estimating NPA size from a UWF image alone, without angiography, using PraNet-based deep learning. This is a potentially useful tool for monitoring eyes with ischemic retinal diseases.
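The class-imbalance handling mentioned in the abstract (focal loss combined with weighted binary cross-entropy) can be written down concretely. The sketch below is a minimal PyTorch formulation of such a combined loss, not the authors' implementation; alpha, gamma, pos_weight, and the lambda weights are illustrative assumptions, since the study's exact settings are not given here.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy, well-classified pixels."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)  # probability assigned to the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

def weighted_bce_loss(logits, targets, pos_weight=10.0):
    """BCE with extra weight on the rare positive (non-perfusion) pixels."""
    return F.binary_cross_entropy_with_logits(
        logits, targets,
        pos_weight=torch.tensor(pos_weight, device=logits.device),
    )

def combined_loss(logits, targets, lambda_focal=1.0, lambda_bce=1.0):
    """Weighted sum of focal loss and weighted BCE for an imbalanced mask."""
    return (lambda_focal * focal_loss(logits, targets)
            + lambda_bce * weighted_bce_loss(logits, targets))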
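The Bland–Altman verification described above amounts to computing the mean difference (bias) and the 95% limits of agreement between estimated and ground-truth NPA, then checking the two reported criteria. A minimal NumPy sketch, assuming paired per-image area measurements and interpreting the "confidence limits zone" as the span between the limits of agreement:

import numpy as np

def bland_altman_check(estimated_npa, ground_truth_npa):
    """Bland-Altman statistics for paired area measurements."""
    est = np.asarray(estimated_npa, dtype=float)
    gt = np.asarray(ground_truth_npa, dtype=float)
    diff = est - gt                              # per-image difference
    bias = diff.mean()                           # systematic error
    sd = diff.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

    zone_width = loa_high - loa_low
    bias_ok = abs(bias) < 0.10 * zone_width          # bias < 10% of the limits zone
    n_outliers = np.sum((diff < loa_low) | (diff > loa_high))
    outliers_ok = n_outliers < 0.10 * diff.size      # outliers < 10% of paired images
    return bias, (loa_low, loa_high), bias_ok, outliers_ok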
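The reported sensitivity and specificity follow from a 2x2 contingency table of predicted versus ground-truth NPA status. The sketch below shows that calculation; the detection threshold in the usage comment is a hypothetical illustration, not the cut-off used in the study.

import numpy as np

def sensitivity_specificity(pred_positive, true_positive):
    """Sensitivity and specificity from binary predictions and labels."""
    pred = np.asarray(pred_positive, dtype=bool)
    truth = np.asarray(true_positive, dtype=bool)
    tp = np.sum(pred & truth)       # cells of the 2x2 contingency table
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    sensitivity = tp / (tp + fn)    # true-positive rate
    specificity = tn / (tn + fp)    # true-negative rate
    return sensitivity, specificity

# Example (illustrative threshold only): call an eye "NPA present" when the
# estimated area exceeds 5 disc areas, and compare with the FA ground truth.
# sens, spec = sensitivity_specificity(estimated_npa > 5.0, ground_truth_npa > 5.0)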


Bibliographic Details
Main Authors: Inoda, Satoru, Takahashi, Hidenori, Yamagata, Hitoshi, Hisadome, Yoichiro, Kondo, Yusuke, Tampo, Hironobu, Sakamoto, Shinichi, Katada, Yusaku, Kurihara, Toshihide, Kawashima, Hidetoshi, Yanagi, Yasuo
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9759556/
https://www.ncbi.nlm.nih.gov/pubmed/36528737
http://dx.doi.org/10.1038/s41598-022-25894-9
_version_ 1784852259000025088
author Inoda, Satoru
Takahashi, Hidenori
Yamagata, Hitoshi
Hisadome, Yoichiro
Kondo, Yusuke
Tampo, Hironobu
Sakamoto, Shinichi
Katada, Yusaku
Kurihara, Toshihide
Kawashima, Hidetoshi
Yanagi, Yasuo
author_facet Inoda, Satoru
Takahashi, Hidenori
Yamagata, Hitoshi
Hisadome, Yoichiro
Kondo, Yusuke
Tampo, Hironobu
Sakamoto, Shinichi
Katada, Yusaku
Kurihara, Toshihide
Kawashima, Hidetoshi
Yanagi, Yasuo
author_sort Inoda, Satoru
collection PubMed
description We herein propose a PraNet-based deep-learning model for estimating the size of the non-perfusion area (NPA) in pseudo-color fundus photographs from an ultra-wide-field (UWF) image. We trained the model with focal loss and weighted binary cross-entropy loss to handle the class-imbalanced dataset, and optimized hyperparameters to minimize validation loss. As expected, the resulting PraNet-based deep-learning model outperformed previously published methods. For verification, we used UWF fundus images with NPA and compared the estimated NPA (eNPA) against the ground truth from fluorescein angiography (FA) using Bland–Altman plots, which showed that the bias between the eNPA and the ground truth was smaller than 10% of the confidence-limit zone and that fewer than 10% of the paired images were outliers. The accuracy of the model was also tested on an external dataset from another institution, which confirmed the generalizability of the model. For validation, we used a contingency table for ROC analysis to assess the sensitivity and specificity of the eNPA: sensitivity ranged from 83.3% to 87.0% and specificity from 79.3% to 85.7%. In conclusion, we developed an AI model capable of estimating NPA size from a UWF image alone, without angiography, using PraNet-based deep learning. This is a potentially useful tool for monitoring eyes with ischemic retinal diseases.
format Online
Article
Text
id pubmed-9759556
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-97595562022-12-19 Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images Inoda, Satoru Takahashi, Hidenori Yamagata, Hitoshi Hisadome, Yoichiro Kondo, Yusuke Tampo, Hironobu Sakamoto, Shinichi Katada, Yusaku Kurihara, Toshihide Kawashima, Hidetoshi Yanagi, Yasuo Sci Rep Article We herein propose a PraNet-based deep-learning model for estimating the size of non-perfusion area (NPA) in pseudo-color fundus photos from an ultra-wide-field (UWF) image. We trained the model with focal loss and weighted binary cross-entropy loss to deal with the class-imbalanced dataset, and optimized hyperparameters in order to minimize validation loss. As expected, the resultant PraNet-based deep-learning model outperformed previously published methods. For verification, we used UWF fundus images with NPA and used Bland–Altman plots to compare estimated NPA with the ground truth in FA, which demonstrated that bias between the eNPA and ground truth was smaller than 10% of the confidence limits zone and that the number of outliers was less than 10% of observed paired images. The accuracy of the model was also tested on an external dataset from another institution, which confirmed the generalization of the model. For validation, we employed a contingency table for ROC analysis to judge the sensitivity and specificity of the estimated-NPA (eNPA). The results demonstrated that the sensitivity and specificity ranged from 83.3–87.0% and 79.3–85.7%, respectively. In conclusion, we developed an AI model capable of estimating NPA size from only an UWF image without angiography using PraNet-based deep learning. This is a potentially useful tool in monitoring eyes with ischemic retinal diseases. Nature Publishing Group UK 2022-12-17 /pmc/articles/PMC9759556/ /pubmed/36528737 http://dx.doi.org/10.1038/s41598-022-25894-9 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) .
spellingShingle Article
Inoda, Satoru
Takahashi, Hidenori
Yamagata, Hitoshi
Hisadome, Yoichiro
Kondo, Yusuke
Tampo, Hironobu
Sakamoto, Shinichi
Katada, Yusaku
Kurihara, Toshihide
Kawashima, Hidetoshi
Yanagi, Yasuo
Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
title Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
title_full Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
title_fullStr Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
title_full_unstemmed Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
title_short Deep-learning-based AI for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
title_sort deep-learning-based ai for evaluating estimated nonperfusion areas requiring further examination in ultra-widefield fundus images
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9759556/
https://www.ncbi.nlm.nih.gov/pubmed/36528737
http://dx.doi.org/10.1038/s41598-022-25894-9
work_keys_str_mv AT inodasatoru deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT takahashihidenori deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT yamagatahitoshi deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT hisadomeyoichiro deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT kondoyusuke deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT tampohironobu deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT sakamotoshinichi deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT katadayusaku deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT kuriharatoshihide deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT kawashimahidetoshi deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages
AT yanagiyasuo deeplearningbasedaiforevaluatingestimatednonperfusionareasrequiringfurtherexaminationinultrawidefieldfundusimages