
Explainability for artificial intelligence in healthcare: a multidisciplinary perspective

BACKGROUND: Explainability is one of the most heavily debated topics in the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet explainability is not a purely technological issue; instead, it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.

METHODS: Taking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the “Principles of Biomedical Ethics” by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.

RESULTS: Each of the domains highlights a different set of core considerations and values relevant to understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms of how it can be achieved and what is beneficial from a development perspective. From the legal perspective, we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.

CONCLUSIONS: To ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward.


Bibliographic Details
Main Authors: Amann, Julia; Blasimme, Alessandro; Vayena, Effy; Frey, Dietmar; Madai, Vince I.
Format: Online Article Text
Language: English
Journal: BMC Med Inform Decis Mak
Article Type: Research Article
Published: BioMed Central, 30 November 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7706019/
https://www.ncbi.nlm.nih.gov/pubmed/33256715
http://dx.doi.org/10.1186/s12911-020-01332-6
© The Author(s) 2020. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).