Explainability and causability in digital pathology

The current move towards digital pathology enables pathologists to use artificial intelligence (AI)-based computer programmes for the advanced analysis of whole slide images. However, the best-performing AI algorithms for image analysis are currently deemed black boxes, since it often remains unclear, even to their developers, why an algorithm delivered a particular result. Especially in medicine, a better understanding of algorithmic decisions is essential to avoid mistakes and adverse effects on patients. This review article aims to provide medical experts with insights into the issue of explainability in digital pathology. A short introduction to the relevant underlying core concepts of machine learning helps the reader understand why explainability is a specific issue in this field. To address this issue, the rapidly evolving research field of explainable AI (XAI) has developed many techniques and methods to make black-box machine-learning systems more transparent. These XAI methods are a first step towards making black-box AI systems understandable by humans. However, we argue that an explanation interface must complement these explainable models to make their results useful to human stakeholders and to achieve a high level of causability, i.e. a high level of causal understanding by the user. This is especially relevant in the medical field, since explainability and causability also play a crucial role in compliance with regulatory requirements. We conclude by promoting the need for novel user interfaces for AI applications in pathology that enable contextual understanding and allow the medical expert to ask interactive 'what-if' questions. In pathology, such user interfaces will be important not only for achieving a high level of causability; they will also be crucial for keeping the human in the loop and bringing medical experts' experience and conceptual knowledge into AI processes.
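The article itself presents no code; purely as an illustration of what one simple, model-agnostic XAI technique of the kind the abstract alludes to looks like in practice, the following minimal Python sketch implements occlusion sensitivity: a patch is slid across an image and the drop in the model's output marks the regions the prediction depends on. The predict function, the occlusion_map helper, and the random tile are illustrative assumptions, not anything taken from the reviewed article.

import numpy as np

def predict(image: np.ndarray) -> float:
    """Toy stand-in classifier: 'tumour score' = mean intensity of the
    centre region. Illustrative only; replace with a real model's
    forward pass over a whole-slide-image tile."""
    h, w = image.shape
    return float(image[h // 4: 3 * h // 4, w // 4: 3 * w // 4].mean())

def occlusion_map(image: np.ndarray, patch: int = 8) -> np.ndarray:
    """Slide a grey patch over the image and record how much the
    prediction drops; large drops mark regions the model relies on."""
    baseline = predict(image)
    heat = np.zeros_like(image, dtype=float)
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = image.mean()  # occlude one patch
            heat[y:y + patch, x:x + patch] = baseline - predict(occluded)
    return heat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tile = rng.random((64, 64))            # stand-in for a pathology image tile
    heat = occlusion_map(tile, patch=16)
    print("most influential patch at", np.unravel_index(heat.argmax(), heat.shape))

A real deployment would swap the toy predict for a trained classifier; the resulting heatmap is exactly the kind of raw explanation that, per the authors' argument, still needs an explanation interface before it yields causability for the pathologist.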

Bibliographic Details
Main Authors: Plass, Markus; Kargl, Michaela; Kiehl, Tim-Rasmus; Regitnig, Peter; Geißler, Christian; Evans, Theodore; Zerbe, Norman; Carvalho, Rita; Holzinger, Andreas; Müller, Heimo
Format: Online Article, Text
Language: English
Published: John Wiley & Sons, Inc., 2023
Subjects: Invited Review
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240147/
https://www.ncbi.nlm.nih.gov/pubmed/37045794
http://dx.doi.org/10.1002/cjp2.322
Source: J Pathol Clin Res (Invited Review), published online 2023-04-12 by John Wiley & Sons, Inc.
© 2023 The Authors. The Journal of Pathology: Clinical Research published by The Pathological Society of Great Britain and Ireland and John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.