
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is a scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations—with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need to have standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.


Bibliographic Details
Main Authors: Amparore, Elvio; Perotti, Alan; Bajardi, Paolo
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2021
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8056245/
https://www.ncbi.nlm.nih.gov/pubmed/33977131
http://dx.doi.org/10.7717/peerj-cs.479
_version_ 1783680613148000256
author Amparore, Elvio
Perotti, Alan
Bajardi, Paolo
author_facet Amparore, Elvio
Perotti, Alan
Bajardi, Paolo
author_sort Amparore, Elvio
collection PubMed
description The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is a scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations—with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need to have standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
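The description above mentions LEAF's metrics and the instability of LIME/SHAP explanations only at a high level. As a purely illustrative sketch (this is not the LEAF API and not the paper's actual metric definitions), the snippet below shows one way the "unstable explanations" issue can be probed: re-running LIME on the same instance and measuring how much the top-k features overlap across runs. It assumes the lime and scikit-learn packages are installed; the dataset, the random-forest model, and the top_k_features helper are arbitrary choices made for this example.

# Hypothetical illustration (not the LEAF API): probe explanation stability by
# re-running LIME on one instance and comparing the top-k features across runs.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target

# Any black-box classifier would do; a random forest is used here as an example.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
)

def top_k_features(instance, k=5):
    """Return the indices of the k most important features from one LIME run."""
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=k)
    # as_map() maps each explained label to (feature_id, weight) pairs;
    # label 1 is LIME's default explained label for classification.
    return {idx for idx, _weight in exp.as_map()[1]}

# Explain the same instance several times and compute pairwise Jaccard overlap:
# values well below 1.0 indicate the kind of instability discussed in the paper.
instance = X[0]
runs = [top_k_features(instance) for _ in range(10)]
jaccard = [len(a & b) / len(a | b) for i, a in enumerate(runs) for b in runs[i + 1:]]
print(f"Mean top-5 feature overlap across runs: {np.mean(jaccard):.2f}")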
format Online
Article
Text
id pubmed-8056245
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-8056245 2021-05-10 To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods Amparore, Elvio Perotti, Alan Bajardi, Paolo PeerJ Comput Sci Artificial Intelligence The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is a scarce consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as a decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations—with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need to have standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques. PeerJ Inc. 2021-04-16 /pmc/articles/PMC8056245/ /pubmed/33977131 http://dx.doi.org/10.7717/peerj-cs.479 Text en ©2021 Amparore et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Amparore, Elvio
Perotti, Alan
Bajardi, Paolo
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
title To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
title_full To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
title_fullStr To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
title_full_unstemmed To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
title_short To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
title_sort to trust or not to trust an explanation: using leaf to evaluate local linear xai methods
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8056245/
https://www.ncbi.nlm.nih.gov/pubmed/33977131
http://dx.doi.org/10.7717/peerj-cs.479
work_keys_str_mv AT amparoreelvio totrustornottotrustanexplanationusingleaftoevaluatelocallinearxaimethods
AT perottialan totrustornottotrustanexplanationusingleaftoevaluatelocallinearxaimethods
AT bajardipaolo totrustornottotrustanexplanationusingleaftoevaluatelocallinearxaimethods