GANterfactual—Counterfactual Explanations for Medical Non-experts Using Generative Adversarial Learning

Bibliographic Details
Main Authors: Mertes, Silvan; Huber, Tobias; Weitz, Katharina; Heimerl, Alexander; André, Elisabeth
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9024220/
https://www.ncbi.nlm.nih.gov/pubmed/35464995
http://dx.doi.org/10.3389/frai.2022.825565
Description: With the ongoing rise of machine learning, methods for explaining decisions made by artificial intelligence systems are becoming an increasingly important topic. Especially for image classification tasks, many state-of-the-art tools for explaining such classifiers rely on visual highlighting of important areas of the input data. In contrast, counterfactual explanation systems aim to enable counterfactual reasoning by modifying the input image in such a way that the classifier would have made a different prediction. In doing so, users of counterfactual explanation systems are equipped with a completely different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. Especially in medical contexts, where relevant information often consists of textural and structural details, high-quality counterfactual images have the potential to give meaningful insights into decision processes. In this work, we present GANterfactual, an approach to generating such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in an exemplary medical use case. Our results show that, in the chosen medical use case, counterfactual explanations lead to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP.
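The core idea described in the abstract can be illustrated with a toy sketch: a counterfactual explanation is a modified version of the input that flips the classifier's prediction. In the actual paper this modification is produced by an adversarially trained image-to-image translation network; the classifier, the perturbation rule, and all names below are simplified placeholders invented for illustration, not the authors' implementation.

```python
import numpy as np

def classify(image):
    """Hypothetical binary classifier: predicts 1 if mean intensity is high.
    Stands in for the medical image classifier being explained."""
    return int(image.mean() > 0.5)

def generate_counterfactual(image, step=0.05, max_iters=100):
    """Modify the image until the classifier's prediction flips.
    GANterfactual learns this translation adversarially (image-to-image
    translation); here a uniform intensity shift plays that role."""
    original = classify(image)
    cf = image.copy()
    direction = -1.0 if original == 1 else 1.0
    for _ in range(max_iters):
        if classify(cf) != original:
            return cf  # counterfactual found: prediction has flipped
        cf = np.clip(cf + direction * step, 0.0, 1.0)
    return cf

x = np.full((4, 4), 0.7)           # toy input, classified as class 1
x_cf = generate_counterfactual(x)  # modified until classified as class 0
```

The contrast with saliency maps (LIME, LRP) is visible even in this sketch: instead of highlighting which pixels mattered, the user is shown a plausible alternative image that would have received the other label.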
Journal: Frontiers in Artificial Intelligence (Front Artif Intell), Artificial Intelligence section
Published online: 2022-04-08
Copyright © 2022 Mertes, Huber, Weitz, Heimerl and André.
License: https://creativecommons.org/licenses/by/4.0/
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.