
Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis


Bibliographic Details
Main Authors: Singh, Amitojdeep, Jothi Balaji, Janarthanam, Rasheed, Mohammed Abdul, Jayakumar, Varadharajan, Raman, Rajiv, Lakshminarayanan, Vasudevan
Format: Online Article Text
Language: English
Published: Dove 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8219310/
https://www.ncbi.nlm.nih.gov/pubmed/34177258
http://dx.doi.org/10.2147/OPTH.S312236
_version_ 1783710899861716992
author Singh, Amitojdeep
Jothi Balaji, Janarthanam
Rasheed, Mohammed Abdul
Jayakumar, Varadharajan
Raman, Rajiv
Lakshminarayanan, Vasudevan
author_facet Singh, Amitojdeep
Jothi Balaji, Janarthanam
Rasheed, Mohammed Abdul
Jayakumar, Varadharajan
Raman, Rajiv
Lakshminarayanan, Vasudevan
author_sort Singh, Amitojdeep
collection PubMed
description BACKGROUND: The lack of explanations for the decisions made by deep learning algorithms has hampered their acceptance by the clinical community despite highly accurate results on multiple problems. Attribution methods explaining deep learning models have been tested on medical imaging problems. The performance of various attribution methods has been compared for models trained on standard machine learning datasets but not on medical images. In this study, we performed a comparative analysis to determine the method with the best explanations for retinal OCT diagnosis. METHODS: A well-known deep learning model, Inception-v3, was trained to diagnose 3 retinal diseases – choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen. The explanations from 13 different attribution methods were rated by a panel of 14 clinicians for clinical significance. Feedback was obtained from the clinicians regarding the current and future scope of such methods. RESULTS: An attribution method based on Taylor series expansion, called Deep Taylor, was rated the highest by clinicians with a median rating of 3.85/5. It was followed by Guided backpropagation (GBP) and SHapley Additive exPlanations (SHAP). CONCLUSION: Explanations from the top methods were able to highlight the structures for each disease – fluid accumulation for CNV, the boundaries of edema for DME, and bumpy areas of retinal pigment epithelium (RPE) for drusen. The most suitable method for a specific medical diagnosis task may be different from the one considered best for conventional tasks. Overall, there was a high degree of acceptance from the clinicians surveyed in the study.
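SHAP, one of the higher-rated methods in the abstract above, attributes the model's output to each input feature as that feature's average marginal contribution over all orderings in which features are switched on. The study applies a deep-model approximation to Inception-v3 on OCT scans; the brute-force toy below is only a minimal sketch of the underlying Shapley-value idea (the three-feature model and all names here are illustrative, not from the paper), feasible only for a handful of features:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions of f(x) relative to a baseline input.

    Each feature's attribution is its average marginal contribution to f
    over all subsets of the other features. Cost is exponential in the
    number of features; SHAP approximates this for deep models.
    """
    n = len(x)

    def eval_subset(S):
        # Features in S take their actual values, the rest the baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (eval_subset(set(S) | {i}) - eval_subset(set(S)))
    return phi

# For an additive model the attributions recover each term exactly.
f = lambda z: 2 * z[0] + 3 * z[1] + z[2]
print(shapley_values(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))  # approx. [2.0, 3.0, 1.0]
```

By construction the attributions sum to f(x) − f(baseline), the "efficiency" property that makes Shapley-based explanations attractive for quantifying how much each image region contributed to a diagnosis.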
format Online
Article
Text
id pubmed-8219310
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Dove
record_format MEDLINE/PubMed
spelling pubmed-82193102021-06-24 Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis Singh, Amitojdeep Jothi Balaji, Janarthanam Rasheed, Mohammed Abdul Jayakumar, Varadharajan Raman, Rajiv Lakshminarayanan, Vasudevan Clin Ophthalmol Original Research BACKGROUND: The lack of explanations for the decisions made by deep learning algorithms has hampered their acceptance by the clinical community despite highly accurate results on multiple problems. Attribution methods explaining deep learning models have been tested on medical imaging problems. The performance of various attribution methods has been compared for models trained on standard machine learning datasets but not on medical images. In this study, we performed a comparative analysis to determine the method with the best explanations for retinal OCT diagnosis. METHODS: A well-known deep learning model, Inception-v3, was trained to diagnose 3 retinal diseases – choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen. The explanations from 13 different attribution methods were rated by a panel of 14 clinicians for clinical significance. Feedback was obtained from the clinicians regarding the current and future scope of such methods. RESULTS: An attribution method based on Taylor series expansion, called Deep Taylor, was rated the highest by clinicians with a median rating of 3.85/5. It was followed by Guided backpropagation (GBP) and SHapley Additive exPlanations (SHAP). CONCLUSION: Explanations from the top methods were able to highlight the structures for each disease – fluid accumulation for CNV, the boundaries of edema for DME, and bumpy areas of retinal pigment epithelium (RPE) for drusen. The most suitable method for a specific medical diagnosis task may be different from the one considered best for conventional tasks. Overall, there was a high degree of acceptance from the clinicians surveyed in the study.
Dove 2021-06-18 /pmc/articles/PMC8219310/ /pubmed/34177258 http://dx.doi.org/10.2147/OPTH.S312236 Text en © 2021 Singh et al. https://creativecommons.org/licenses/by-nc/3.0/ This work is published and licensed by Dove Medical Press Limited. The full terms of this license are available at https://www.dovepress.com/terms.php and incorporate the Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/). By accessing the work you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms (https://www.dovepress.com/terms.php).
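Guided backpropagation (GBP), the runner-up in the clinicians' ratings, modifies the backward pass of a ReLU network so that at each ReLU the gradient flows only where the forward activation was positive and the incoming gradient is positive. The paper applies it to Inception-v3; the two-layer toy network below is a hypothetical, minimal sketch of just that rule (weights and names are invented for illustration, not the authors' implementation):

```python
def relu(v):
    return [max(0.0, a) for a in v]

def guided_backprop(W1, w2, x):
    """Input saliency for y = w2 . relu(W1 @ x) via guided backprop.

    The guided ReLU rule passes a gradient only where the forward
    activation was positive AND the gradient itself is positive,
    which is what distinguishes GBP from a plain input gradient.
    """
    pre = [sum(w * xi for w, xi in zip(row, x)) for row in W1]
    h = relu(pre)
    # Gradient of y w.r.t. h is w2; apply the guided ReLU rule.
    g_h = [g if (a > 0 and g > 0) else 0.0 for g, a in zip(w2, h)]
    # Backpropagate through the linear layer to the input.
    return [sum(W1[j][i] * g_h[j] for j in range(len(W1)))
            for i in range(len(x))]

W1 = [[1.0, -1.0], [0.5, 2.0]]
w2 = [1.0, -3.0]
print(guided_backprop(W1, w2, [2.0, 1.0]))  # [1.0, -1.0]
```

On an image model the same rule runs through every ReLU of the network, and the resulting input gradient is rendered as a saliency map over the OCT scan, which is what the panel of clinicians rated.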
spellingShingle Original Research
Singh, Amitojdeep
Jothi Balaji, Janarthanam
Rasheed, Mohammed Abdul
Jayakumar, Varadharajan
Raman, Rajiv
Lakshminarayanan, Vasudevan
Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
title Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
title_full Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
title_fullStr Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
title_full_unstemmed Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
title_short Evaluation of Explainable Deep Learning Methods for Ophthalmic Diagnosis
title_sort evaluation of explainable deep learning methods for ophthalmic diagnosis
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8219310/
https://www.ncbi.nlm.nih.gov/pubmed/34177258
http://dx.doi.org/10.2147/OPTH.S312236
work_keys_str_mv AT singhamitojdeep evaluationofexplainabledeeplearningmethodsforophthalmicdiagnosis
AT jothibalajijanarthanam evaluationofexplainabledeeplearningmethodsforophthalmicdiagnosis
AT rasheedmohammedabdul evaluationofexplainabledeeplearningmethodsforophthalmicdiagnosis
AT jayakumarvaradharajan evaluationofexplainabledeeplearningmethodsforophthalmicdiagnosis
AT ramanrajiv evaluationofexplainabledeeplearningmethodsforophthalmicdiagnosis
AT lakshminarayananvasudevan evaluationofexplainabledeeplearningmethodsforophthalmicdiagnosis