Putting explainable AI in context: institutional explanations for medical AI

There is an ongoing debate about whether, and in what sense, machine learning systems used in medical contexts need to be explainable. Those arguing in favor contend that these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest that the high accuracy and reliability of the systems is sufficient to provide epistemically justified beliefs without the need to explain each individual decision. But, as we show, both solutions have limitations, and it is unclear whether either addresses the epistemic worries of the medical professionals using these systems. We argue that these systems do require an explanation, but an institutional one. Explanations of this type provide the reasons why the medical professional should rely on the system in practice; that is, they aim to address the epistemic concerns of those using the system in specific contexts and on specific occasions. Ensuring that these institutional explanations are fit for purpose means ensuring that the institutions designing and deploying these systems are transparent about the assumptions baked into them. This requires coordination with experts and end-users concerning how the system will function in the field, the metrics used to evaluate its accuracy, and the procedures for auditing it so that biases and failures do not go unaddressed. We contend that this broader explanation is necessary for either post hoc explanations or accuracy scores to be epistemically meaningful to the medical professional, making it possible for them to rely on these systems as effective and useful tools in their practice.

Bibliographic Details
Main Authors: Theunissen, Mark; Browning, Jacob
Format: Online Article Text
Language: English
Published: Springer Netherlands, 2022
Subjects: Original Paper
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9073821/
https://www.ncbi.nlm.nih.gov/pubmed/35539962
http://dx.doi.org/10.1007/s10676-022-09649-8
collection PubMed
id pubmed-9073821
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Ethics Inf Technol (Original Paper)
published online 2022-05-06
rights © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original authors and the source.
topic Original Paper