
How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains

Few empirical studies have examined how people understand counterfactual explanations for other people’s decisions, for example, “if you had asked for a lower amount, your loan application would have been approved”. Yet many current Artificial Intelligence (AI) decision support systems rely on counterfactual explanations to improve human understanding and trust.

Bibliographic Details
Main Authors: Celar, Lenart; Byrne, Ruth M. J.
Format: Online Article Text
Language: English
Published: Springer US 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10520145/
https://www.ncbi.nlm.nih.gov/pubmed/36964302
http://dx.doi.org/10.3758/s13421-023-01407-5
_version_ 1785109849725468672
author Celar, Lenart
Byrne, Ruth M. J.
author_facet Celar, Lenart
Byrne, Ruth M. J.
author_sort Celar, Lenart
collection PubMed
description Few empirical studies have examined how people understand counterfactual explanations for other people’s decisions, for example, “if you had asked for a lower amount, your loan application would have been approved”. Yet many current Artificial Intelligence (AI) decision support systems rely on counterfactual explanations to improve human understanding and trust. We compared counterfactual explanations to causal ones, i.e., “because you asked for a high amount, your loan application was not approved”, for an AI’s decisions in a familiar domain (alcohol and driving) and an unfamiliar one (chemical safety) in four experiments (n = 731). Participants were shown inputs to an AI system, its decisions, and an explanation for each decision; they attempted to predict the AI’s decisions, or to make their own decisions. Participants judged counterfactual explanations more helpful than causal ones, but counterfactuals did not improve the accuracy of their predictions of the AI’s decisions more than causals (Experiment 1). However, counterfactuals improved the accuracy of participants’ own decisions more than causals (Experiment 2). When the AI’s decisions were correct (Experiments 1 and 2), participants considered explanations more helpful and made more accurate judgements in the familiar domain than in the unfamiliar one; but when the AI’s decisions were incorrect, they considered explanations less helpful and made fewer accurate judgements in the familiar domain than the unfamiliar one, whether they predicted the AI’s decisions (Experiment 3a) or made their own decisions (Experiment 3b). The results corroborate the proposal that counterfactuals provide richer information than causals, because their mental representation includes more possibilities. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.3758/s13421-023-01407-5.
format Online
Article
Text
id pubmed-10520145
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-10520145 2023-09-27 How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains Celar, Lenart; Byrne, Ruth M. J. Mem Cognit Article Springer US 2023-03-24 2023 /pmc/articles/PMC10520145/ /pubmed/36964302 http://dx.doi.org/10.3758/s13421-023-01407-5 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Celar, Lenart
Byrne, Ruth M. J.
How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains
title How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains
title_full How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains
title_fullStr How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains
title_full_unstemmed How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains
title_short How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains
title_sort how people reason with counterfactual and causal explanations for artificial intelligence decisions in familiar and unfamiliar domains
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10520145/
https://www.ncbi.nlm.nih.gov/pubmed/36964302
http://dx.doi.org/10.3758/s13421-023-01407-5
work_keys_str_mv AT celarlenart howpeoplereasonwithcounterfactualandcausalexplanationsforartificialintelligencedecisionsinfamiliarandunfamiliardomains
AT byrneruthmj howpeoplereasonwithcounterfactualandcausalexplanationsforartificialintelligencedecisionsinfamiliarandunfamiliardomains