Examining the effect of explanation on satisfaction and trust in AI diagnostic systems

Bibliographic Details
Main Authors: Alam, Lamia; Mueller, Shane
Format: Online Article Text
Language: English
Published: BioMed Central, 2021
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8176739/
https://www.ncbi.nlm.nih.gov/pubmed/34082719
http://dx.doi.org/10.1186/s12911-021-01542-6
author Alam, Lamia
Mueller, Shane
collection PubMed
description BACKGROUND: Artificial Intelligence has the potential to revolutionize healthcare, and it is increasingly being deployed to support and assist medical diagnosis. One potential application of AI is as the first point of contact for patients, providing initial diagnoses before a patient is referred to a specialist and allowing health care professionals to focus on more challenging and critical aspects of treatment. But for AI systems to succeed in this role, it will not be enough for them to merely provide accurate diagnoses and predictions; they will also need to explain (to both physicians and patients) why those diagnoses were made. Without such explanations, even accurate and correct diagnoses and treatments might be ignored or rejected. METHOD: It is important to evaluate how effective these explanations are and to understand the relative effectiveness of different kinds of explanation. In this paper, we examine this problem across two simulation experiments. In the first experiment, we tested a re-diagnosis scenario to understand the effect of local and global explanations. In the second, we implemented different forms of explanation in a similar diagnosis scenario. RESULTS: Results show that explanation improved satisfaction measures during the critical re-diagnosis period but had little effect before re-diagnosis (when initial treatment was taking place) or after (when an alternate diagnosis resolved the case successfully). Furthermore, initial “global” explanations about the process had no impact on immediate satisfaction but improved later judgments of understanding about the AI. Results of the second experiment show that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanations or text-based rationales alone. As in Experiment 1, these explanations had their effect primarily on immediate measures of satisfaction during the re-diagnosis crisis, with little advantage prior to re-diagnosis or once the diagnosis was successfully resolved. CONCLUSION: These two studies help us draw several conclusions about how patient-facing explanatory diagnostic systems may succeed or fail. Based on these studies and a review of the literature, we provide design recommendations for the explanations offered by AI systems in the healthcare domain. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12911-021-01542-6.
format Online
Article
Text
id pubmed-8176739
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-8176739 2021-06-04 Examining the effect of explanation on satisfaction and trust in AI diagnostic systems Alam, Lamia; Mueller, Shane. BMC Med Inform Decis Mak (Research). BioMed Central, 2021-06-03. /pmc/articles/PMC8176739/ /pubmed/34082719 http://dx.doi.org/10.1186/s12911-021-01542-6 Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
title Examining the effect of explanation on satisfaction and trust in AI diagnostic systems
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8176739/
https://www.ncbi.nlm.nih.gov/pubmed/34082719
http://dx.doi.org/10.1186/s12911-021-01542-6