Achieving descriptive accuracy in explanations via argumentation: The case of probabilistic classifiers

The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the explanation contents are in correspondence with the internal working of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations which are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA then they can be deceitful, resulting in an unfair behavior toward the users. Crucial as the DA property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature, variants thereof and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight the importance, with a user study, of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.
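
As a rough illustration of the kind of requirement descriptive accuracy captures, the sketch below states a sign-agreement condition for feature-attribution explanations of a probabilistic classifier. This is a minimal sketch under our own assumptions (the attribution function \alpha, the reference value \bar{x}_i and the condition itself are illustrative), not the article's formal definitions of naive, structural or dialectical DA:

% Hedged sketch: \alpha and \bar{x}_i are illustrative assumptions,
% not notation taken from the article.
\[
\forall x_i:\quad \operatorname{sign}\big(\alpha(x_i)\big)
  \;=\; \operatorname{sign}\Big(P(c \mid x) \,-\, P\big(c \mid x[x_i \mapsto \bar{x}_i]\big)\Big)
\]

Here \alpha(x_i) is the attribution the explanation assigns to feature x_i, and x[x_i \mapsto \bar{x}_i] denotes the input x with x_i replaced by a reference value. An explanation violating such a condition would, for instance, report a feature as supporting class c even though, inside the model, that feature actually lowers P(c | x).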

Bibliographic Details
Main Authors: Albini, Emanuele, Rago, Antonio, Baroni, Pietro, Toni, Francesca
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10117939/
https://www.ncbi.nlm.nih.gov/pubmed/37091304
http://dx.doi.org/10.3389/frai.2023.1099407
Collection: PubMed
Institution: National Center for Biotechnology Information
Record ID: pubmed-10117939
Record Format: MEDLINE/PubMed
Published Online: 2023-04-06
Copyright © 2023 Albini, Rago, Baroni and Toni. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.