
Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations


Bibliographic Details
Main Authors: Holzinger, Andreas, Carrington, André, Müller, Heimo
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2020
Subjects: Technical Contribution
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7271052/
https://www.ncbi.nlm.nih.gov/pubmed/32549653
http://dx.doi.org/10.1007/s13218-020-00636-z
_version_ 1783542017185284096
author Holzinger, Andreas
Carrington, André
Müller, Heimo
author_facet Holzinger, Andreas
Carrington, André
Müller, Heimo
author_sort Holzinger, Andreas
collection PubMed
description Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Such autonomous approaches can be very convenient. In certain domains, however, e.g., in the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and a wide variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the inputs to, and representations in, a neural network that caused a result can be highlighted. This is an important first step towards ensuring that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds human expertise to AI/ML processes by enabling domain experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human–AI interfaces for explainable AI. In order to build effective and efficient interactive human–AI interfaces, we have to address the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
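Note: the record does not reproduce the questionnaire itself. As a rough illustration only, the sketch below (Python) shows how an SCS-style score could be computed, assuming, as with the System Usability Scale the abstract refers to, a fixed set of 5-point Likert items whose ratings are summed and normalized to a value between 0 and 1. The function name scs_score, the scoring rule, and the example ratings are illustrative assumptions, not taken from the paper or this record.

    # Illustrative sketch only: the normalization rule below is an assumption
    # modelled on SUS-style scoring, not quoted from the paper.
    def scs_score(ratings: list[int]) -> float:
        """Return a score in [0, 1] from 1-5 Likert ratings, one per SCS item."""
        if not ratings:
            raise ValueError("at least one item rating is required")
        if any(not 1 <= r <= 5 for r in ratings):
            raise ValueError("each rating must be between 1 and 5")
        # Sum the ratings and divide by the maximum attainable total.
        return sum(ratings) / (5 * len(ratings))

    # Example: ten item ratings from one domain expert rating one explanation.
    print(scs_score([5, 4, 4, 5, 3, 4, 4, 5, 4, 4]))  # -> 0.84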
format Online
Article
Text
id pubmed-7271052
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-7271052 2020-06-15 Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations Holzinger, Andreas Carrington, André Müller, Heimo Kunstliche Intell (Oldenbourg) Technical Contribution Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Such autonomous approaches can be very convenient. In certain domains, however, e.g., in the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and a wide variety of methods already exists. For example, with layer-wise relevance propagation, the parts of the inputs to, and representations in, a neural network that caused a result can be highlighted. This is an important first step towards ensuring that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators alike. Interactive ML adds human expertise to AI/ML processes by enabling domain experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human–AI interfaces for explainable AI. In order to build effective and efficient interactive human–AI interfaces, we have to address the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale to measure the quality of explanations. It is based on our notion of Causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale. Springer Berlin Heidelberg 2020-01-21 2020 /pmc/articles/PMC7271052/ /pubmed/32549653 http://dx.doi.org/10.1007/s13218-020-00636-z Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Technical Contribution
Holzinger, Andreas
Carrington, André
Müller, Heimo
Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
title Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
title_full Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
title_fullStr Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
title_full_unstemmed Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
title_short Measuring the Quality of Explanations: The System Causability Scale (SCS): Comparing Human and Machine Explanations
title_sort measuring the quality of explanations: the system causability scale (scs): comparing human and machine explanations
topic Technical Contribution
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7271052/
https://www.ncbi.nlm.nih.gov/pubmed/32549653
http://dx.doi.org/10.1007/s13218-020-00636-z
work_keys_str_mv AT holzingerandreas measuringthequalityofexplanationsthesystemcausabilityscalescscomparinghumanandmachineexplanations
AT carringtonandre measuringthequalityofexplanationsthesystemcausabilityscalescscomparinghumanandmachineexplanations
AT mullerheimo measuringthequalityofexplanationsthesystemcausabilityscalescscomparinghumanandmachineexplanations