Training calibration-based counterfactual explainers for deep learning models in medical image analysis

The rapid adoption of artificial intelligence methods in healthcare is coupled with the critical need for techniques to rigorously introspect models and thereby ensure that they behave reliably. This has led to the design of explainable AI techniques that uncover the relationships between discernible data signatures and model predictions. In this context, counterfactual explanations that synthesize small, interpretable changes to a given query while producing desired changes in model predictions have become popular. This under-constrained inverse problem is vulnerable to introducing irrelevant feature manipulations, particularly when the model's predictions are not well-calibrated. Hence, in this paper, we propose the TraCE (training calibration-based explainers) technique, which utilizes a novel uncertainty-based interval calibration strategy for reliably synthesizing counterfactuals. Given the widespread adoption of machine-learned solutions in radiology, our study focuses on deep models used for identifying anomalies in chest X-ray images. Using rigorous empirical studies, we demonstrate the superiority of TraCE explanations over several state-of-the-art baseline approaches across a range of widely adopted evaluation metrics. Our findings show that TraCE can be used to obtain a holistic understanding of deep models by enabling progressive exploration of decision boundaries, to detect shortcuts, and to infer relationships between patient attributes and disease severity.
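For intuition only, below is a minimal sketch of the kind of counterfactual-synthesis loop the abstract describes: gradient-based edits to a query image that push a classifier's prediction toward a target class while staying close to the query and avoiding regions where the model is poorly calibrated. This is not the authors' TraCE algorithm; the two-headed `model(x) -> (logits, uncertainty)` interface, the penalty weights, and the function name are illustrative assumptions.

```python
# Generic sketch of counterfactual synthesis with an uncertainty penalty.
# NOT the TraCE method itself; model interface and weights are assumptions.
import torch
import torch.nn.functional as F

def synthesize_counterfactual(model, x_query, target_class,
                              steps=200, lr=0.01,
                              lam_proximity=1.0, lam_uncertainty=0.1):
    """Gradient-based search for a counterfactual x_cf near x_query.

    Assumes `model` returns (logits, uncertainty) per input; the
    uncertainty head is a stand-in for a calibration signal.
    """
    x_cf = x_query.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class], device=x_query.device)

    for _ in range(steps):
        optimizer.zero_grad()
        logits, uncertainty = model(x_cf)
        # 1) push the prediction toward the desired class
        loss_pred = F.cross_entropy(logits, target)
        # 2) keep the edit small and interpretable (proximity to the query)
        loss_prox = torch.norm(x_cf - x_query, p=1)
        # 3) penalize high predictive uncertainty so the counterfactual
        #    stays in regions where the model is well-calibrated
        loss_unc = uncertainty.mean()
        loss = loss_pred + lam_proximity * loss_prox + lam_uncertainty * loss_unc
        loss.backward()
        optimizer.step()
        # keep pixel values in a valid image range
        with torch.no_grad():
            x_cf.clamp_(0.0, 1.0)

    return x_cf.detach()
```

The three loss terms mirror the structure the abstract implies: a prediction term that realizes the desired label flip, a proximity term that keeps the change small, and a calibration-motivated term that discourages manipulations the model cannot support reliably.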


Bibliographic Details
Main Authors: Thiagarajan, Jayaraman J., Thopalli, Kowshik, Rajan, Deepta, Turaga, Pavan
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8755769/
https://www.ncbi.nlm.nih.gov/pubmed/35022467
http://dx.doi.org/10.1038/s41598-021-04529-5
Collection: PubMed
Record ID: pubmed-8755769
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sci Rep
Published Online: 2022-01-12
License: © This is a U.S. Government work and not under copyright protection in the US; foreign copyright protection may apply, 2022. Open access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).