Can counterfactual explanations of AI systems’ predictions skew lay users’ causal intuitions about the world? If so, can we correct for that?

Counterfactual (CF) explanations have been employed as one of the modes of explainability in explainable artificial intelligence (AI)—both to increase the transparency of AI systems and to provide recourse. Cognitive science and psychology have pointed out that people regularly use CFs to express causal relationships. Most AI systems, however, are only able to capture associations or correlations in data, so interpreting them as causal would not be justified. In this perspective, we present two experiments (total n = 364) exploring the effects of CF explanations of AI systems’ predictions on lay people’s causal beliefs about the real world. In Experiment 1, we found that providing CF explanations of an AI system’s predictions does indeed (unjustifiably) affect people’s causal beliefs regarding factors/features the AI uses and that people are more likely to view them as causal factors in the real world. Inspired by the literature on misinformation and health warning messaging, Experiment 2 tested whether we can correct for the unjustified change in causal beliefs. We found that pointing out that AI systems capture correlations and not necessarily causal relationships can attenuate the effects of CF explanations on people’s causal beliefs.

Bibliographic Details
Main Authors: Tešić, Marko; Hahn, Ulrike
Format: Online Article Text
Language: English
Published: Patterns (N Y), Elsevier, 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9768678/
https://www.ncbi.nlm.nih.gov/pubmed/36569554
http://dx.doi.org/10.1016/j.patter.2022.100635
Article Type: Perspective
License: © 2022 The Author(s). This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).