The grammar of interactive explanatory model analysis

The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and refers to diverse, even contradictory, interpretations of the same phenomenon. Surprisingly, most methods developed for explainable and responsible machine learning focus on a single aspect of the model behavior. In contrast, we showcase the problem of explainability as an interactive and sequential analysis of a model. This paper proposes how different Explanatory Model Analysis (EMA) methods complement each other and discusses why it is essential to juxtapose them. The introduced process of Interactive EMA (IEMA) derives from the algorithmic side of explainable machine learning and aims to embrace ideas developed in cognitive sciences. We formalize the grammar of IEMA to describe human-model interaction. It is implemented in a widely used human-centered open-source software framework that adopts interactivity, customizability and automation as its main traits. We conduct a user study to evaluate the usefulness of IEMA, which indicates that an interactive sequential analysis of a model may increase the accuracy and confidence of human decision making.

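The record does not name the software framework the abstract refers to. Purely as an illustration of the idea of juxtaposing complementary explanations, the following is a minimal sketch assuming the dalex Python package (from the same authors' ecosystem) together with a placeholder scikit-learn model and dataset, neither of which is taken from the paper:

```python
# Illustrative sketch only (not the paper's code): a short interactive, sequential
# model analysis in the spirit of IEMA, juxtaposing complementary local and global
# explanations of one black-box model. Assumes the dalex and scikit-learn packages;
# the dataset and model below are placeholders.
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = dx.Explainer(model, X, y, label="random forest")

observation = X.iloc[[0]]  # one instance whose prediction we want to understand

# Step 1: a local attribution (break-down) of that single prediction ...
explainer.predict_parts(observation, type="break_down").plot()
# Step 2: ... juxtaposed with a what-if (ceteris paribus) profile of the same instance ...
explainer.predict_profile(observation, variables=["mean radius"]).plot()
# Step 3: ... and a global view: permutation-based variable importance of the model.
explainer.model_parts().plot()
```

The sequence, not any single chart, is the point: each step answers a question raised by the previous one, which is the kind of human-model interaction the grammar of IEMA formalizes.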

Bibliographic Details
Main Authors: Baniecki, Hubert; Parzych, Dariusz; Biecek, Przemyslaw
Format: Online Article Text
Language: English
Published: Springer US, 2023-02-14
Collection: PubMed (National Center for Biotechnology Information), record pubmed-9926444
Record Format: MEDLINE/PubMed
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9926444/
https://www.ncbi.nlm.nih.gov/pubmed/36818741
http://dx.doi.org/10.1007/s10618-023-00924-w
Rights: © The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.