Explainability in medicine in an era of AI-based clinical decision support systems
The combination of “Big Data” and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both...
| Main Authors: | Pierce, Robin L.; Van Biesen, Wim; Van Cauwenberge, Daan; Decruyenaere, Johan; Sterckx, Sigrid |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Frontiers Media S.A., 2022 |
| Subjects: | Genetics |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9527344/ https://www.ncbi.nlm.nih.gov/pubmed/36199569 http://dx.doi.org/10.3389/fgene.2022.903600 |
_version_ | 1784801064815427584 |
author | Pierce, Robin L.; Van Biesen, Wim; Van Cauwenberge, Daan; Decruyenaere, Johan; Sterckx, Sigrid
author_facet | Pierce, Robin L.; Van Biesen, Wim; Van Cauwenberge, Daan; Decruyenaere, Johan; Sterckx, Sigrid
author_sort | Pierce, Robin L. |
collection | PubMed |
description | The combination of “Big Data” and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal level. One of the features that has given rise to particular concern is the issue of explainability, since, if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This “opacity” problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary in order to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself. We argue that, in day-to-day clinical practice, accuracy is sufficient as an “epistemic warrant” for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as being to deliver the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today’s patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients. |
format | Online Article Text |
id | pubmed-9527344 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9527344 2022-10-04 Explainability in medicine in an era of AI-based clinical decision support systems Pierce, Robin L.; Van Biesen, Wim; Van Cauwenberge, Daan; Decruyenaere, Johan; Sterckx, Sigrid Front Genet Genetics The combination of “Big Data” and Artificial Intelligence (AI) is frequently promoted as having the potential to deliver valuable health benefits when applied to medical decision-making. However, the responsible adoption of AI-based clinical decision support systems faces several challenges at both the individual and societal level. One of the features that has given rise to particular concern is the issue of explainability, since, if the way an algorithm arrived at a particular output is not known (or knowable) to a physician, this may lead to multiple challenges, including an inability to evaluate the merits of the output. This “opacity” problem has led to questions about whether physicians are justified in relying on the algorithmic output, with some scholars insisting on the centrality of explainability, while others see no reason to require of AI that which is not required of physicians. We consider that there is merit in both views but find that greater nuance is necessary in order to elucidate the underlying function of explainability in clinical practice and, therefore, its relevance in the context of AI for clinical use. In this paper, we explore explainability by examining what it requires in clinical medicine and draw a distinction between the function of explainability for the current patient versus the future patient. This distinction has implications for what explainability requires in the short and long term. We highlight the role of transparency in explainability, and identify semantic transparency as fundamental to the issue of explainability itself. We argue that, in day-to-day clinical practice, accuracy is sufficient as an “epistemic warrant” for clinical decision-making, and that the most compelling reason for requiring explainability in the sense of scientific or causal explanation is the potential for improving future care by building a more robust model of the world. We identify the goal of clinical decision-making as being to deliver the best possible outcome as often as possible, and find that accuracy is sufficient justification for intervention for today’s patient, as long as efforts to uncover scientific explanations continue to improve healthcare for future patients. Frontiers Media S.A. 2022-09-19 /pmc/articles/PMC9527344/ /pubmed/36199569 http://dx.doi.org/10.3389/fgene.2022.903600 Text en Copyright © 2022 Pierce, Van Biesen, Van Cauwenberge, Decruyenaere and Sterckx. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Genetics; Pierce, Robin L.; Van Biesen, Wim; Van Cauwenberge, Daan; Decruyenaere, Johan; Sterckx, Sigrid; Explainability in medicine in an era of AI-based clinical decision support systems |
title | Explainability in medicine in an era of AI-based clinical decision support systems |
title_full | Explainability in medicine in an era of AI-based clinical decision support systems |
title_fullStr | Explainability in medicine in an era of AI-based clinical decision support systems |
title_full_unstemmed | Explainability in medicine in an era of AI-based clinical decision support systems |
title_short | Explainability in medicine in an era of AI-based clinical decision support systems |
title_sort | explainability in medicine in an era of ai-based clinical decision support systems |
topic | Genetics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9527344/ https://www.ncbi.nlm.nih.gov/pubmed/36199569 http://dx.doi.org/10.3389/fgene.2022.903600 |
work_keys_str_mv | AT piercerobinl explainabilityinmedicineinaneraofaibasedclinicaldecisionsupportsystems AT vanbiesenwim explainabilityinmedicineinaneraofaibasedclinicaldecisionsupportsystems AT vancauwenbergedaan explainabilityinmedicineinaneraofaibasedclinicaldecisionsupportsystems AT decruyenaerejohan explainabilityinmedicineinaneraofaibasedclinicaldecisionsupportsystems AT sterckxsigrid explainabilityinmedicineinaneraofaibasedclinicaldecisionsupportsystems |