
To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems

Explainability for artificial intelligence (AI) in medicine is a hotly debated topic. Our paper presents a review of the key arguments for and against explainability for AI-powered Clinical Decision Support Systems (CDSSs), applied to a concrete use case: an AI-powered CDSS currently used in the emergency call setting to identify patients with life-threatening cardiac arrest. More specifically, we performed a normative analysis using socio-technical scenarios to provide a nuanced account of the role of explainability for CDSSs in this concrete use case, allowing for abstraction to a more general level. Our analysis focused on three layers: technical considerations, human factors, and the designated system role in decision-making. Our findings suggest that whether explainability can provide added value to a CDSS depends on several key questions: technical feasibility, the level of validation in the case of explainable algorithms, the characteristics of the context in which the system is implemented, the designated role in the decision-making process, and the key user group(s). Thus, each CDSS will require an individualized assessment of explainability needs, and we provide an example of what such an assessment could look like in practice.

Bibliographic Details
Main Authors: Amann, Julia, Vetter, Dennis, Blomberg, Stig Nikolaj, Christensen, Helle Collatz, Coffee, Megan, Gerke, Sara, Gilbert, Thomas K., Hagendorff, Thilo, Holm, Sune, Livne, Michelle, Spezzatti, Andy, Strümke, Inga, Zicari, Roberto V., Madai, Vince Istvan
Format: Online Article Text
Language: English
Published: Public Library of Science, 2022
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9931364/
https://www.ncbi.nlm.nih.gov/pubmed/36812545
http://dx.doi.org/10.1371/journal.pdig.0000016
Published 2022-02-17 in PLOS Digital Health (Research Article). © 2022 Amann et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.