Causability and explainability of artificial intelligence in medicine
Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself and classic AI represented comprehensible retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque…
Main authors: Holzinger, Andreas; Langs, Georg; Denk, Helmut; Zatloukal, Kurt; Müller, Heimo
Format: Online Article Text
Language: English
Published: Wiley Periodicals, Inc, 2019
Subjects: Advanced Reviews
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7017860/ https://www.ncbi.nlm.nih.gov/pubmed/32089788 http://dx.doi.org/10.1002/widm.1312
_version_ | 1783497264298196992 |
author | Holzinger, Andreas Langs, Georg Denk, Helmut Zatloukal, Kurt Müller, Heimo |
author_facet | Holzinger, Andreas Langs, Georg Denk, Helmut Zatloukal, Kurt Müller, Heimo |
author_sort | Holzinger, Andreas |
collection | PubMed |
description | Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself and classic AI represented comprehensible retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction. |
format | Online Article Text |
id | pubmed-7017860 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Wiley Periodicals, Inc |
record_format | MEDLINE/PubMed |
spelling | pubmed-7017860 2020-02-20 Causability and explainability of artificial intelligence in medicine Holzinger, Andreas Langs, Georg Denk, Helmut Zatloukal, Kurt Müller, Heimo Wiley Interdiscip Rev Data Min Knowl Discov Advanced Reviews Explainable artificial intelligence (AI) is attracting much interest in medicine. Technically, the problem of explainability is as old as AI itself and classic AI represented comprehensible retraceable approaches. However, their weakness was in dealing with uncertainties of the real world. Through the introduction of probabilistic learning, applications became increasingly successful, but increasingly opaque. Explainable AI deals with the implementation of transparency and traceability of statistical black‐box machine learning methods, particularly deep learning (DL). We argue that there is a need to go beyond explainable AI. To reach a level of explainable medicine we need causability. In the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations. In this article, we provide some necessary definitions to discriminate between explainability and causability as well as a use‐case of DL interpretation and of human explanation in histopathology. The main contribution of this article is the notion of causability, which is differentiated from explainability in that causability is a property of a person, while explainability is a property of a system. This article is categorized under: Fundamental Concepts of Data and Knowledge > Human Centricity and User Interaction. Wiley Periodicals, Inc 2019-04-02 2019 /pmc/articles/PMC7017860/ /pubmed/32089788 http://dx.doi.org/10.1002/widm.1312 Text en © 2019 The Authors. WIREs Data Mining and Knowledge Discovery published by Wiley Periodicals, Inc.
This is an open access article under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Advanced Reviews Holzinger, Andreas Langs, Georg Denk, Helmut Zatloukal, Kurt Müller, Heimo Causability and explainability of artificial intelligence in medicine |
title | Causability and explainability of artificial intelligence in medicine |
title_full | Causability and explainability of artificial intelligence in medicine |
title_fullStr | Causability and explainability of artificial intelligence in medicine |
title_full_unstemmed | Causability and explainability of artificial intelligence in medicine |
title_short | Causability and explainability of artificial intelligence in medicine |
title_sort | causability and explainability of artificial intelligence in medicine |
topic | Advanced Reviews |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7017860/ https://www.ncbi.nlm.nih.gov/pubmed/32089788 http://dx.doi.org/10.1002/widm.1312 |
work_keys_str_mv | AT holzingerandreas causabilityandexplainabilityofartificialintelligenceinmedicine AT langsgeorg causabilityandexplainabilityofartificialintelligenceinmedicine AT denkhelmut causabilityandexplainabilityofartificialintelligenceinmedicine AT zatloukalkurt causabilityandexplainabilityofartificialintelligenceinmedicine AT mullerheimo causabilityandexplainabilityofartificialintelligenceinmedicine |