Explainable AI as evidence of fair decisions
This paper will propose that explanations are valuable to those impacted by a model's decisions (model patients) to the extent that they provide evidence that a past adverse decision was unfair. Under this proposal, we should favor models and explainability methods which generate counterfactuals of two types. The first type of counterfactual is positive evidence of fairness: a set of states under the control of the patient which (if changed) would have led to a beneficial decision. The second type of counterfactual is negative evidence of fairness: a set of irrelevant group or behavioral attributes which (if changed) would not have led to a beneficial decision. Each of these counterfactual statements is related to fairness, under the Liberal Egalitarian idea that treating one person differently than another is justified only on the basis of features which were plausibly under each person's control. Other aspects of an explanation, such as feature importance and actionable recourse, are not essential under this view, and need not be a goal of explainable AI.
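To make the two kinds of counterfactual evidence concrete, here is a minimal Python sketch. It assumes an invented loan-approval threshold model; the feature names, weights, and candidate-value search are illustrative assumptions, not the paper's method.

```python
# Hypothetical illustration of the two counterfactual types described in the
# abstract. The loan-approval model, feature names, and weights below are
# invented for this sketch; they are not taken from the paper.
from itertools import product

# Toy linear threshold model. "income" and "debt" stand in for features
# plausibly under the applicant's control; "group" is an irrelevant group
# attribute, which a fair model should ignore (weight 0).
WEIGHTS = {"income": 0.6, "debt": -0.4, "group": 0.0}
THRESHOLD = 30.0

def decide(applicant):
    """Return True for a beneficial decision (e.g., loan approved)."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS) >= THRESHOLD

def positive_evidence(applicant, controllable):
    """Positive evidence of fairness: a change to features under the
    patient's control that would have led to a beneficial decision."""
    keys = sorted(controllable)
    for values in product(*(controllable[k] for k in keys)):
        candidate = {**applicant, **dict(zip(keys, values))}
        if decide(candidate):
            return candidate
    return None  # no controllable path to a beneficial decision found

def negative_evidence(applicant, attribute, alternatives):
    """Negative evidence of fairness: changing an irrelevant group
    attribute does not change the decision."""
    baseline = decide(applicant)
    return all(decide({**applicant, attribute: v}) == baseline
               for v in alternatives)

applicant = {"income": 40, "debt": 20, "group": 1}    # score 16 -> denied
print(decide(applicant))                              # False (adverse decision)
print(positive_evidence(applicant, {"income": [60, 80], "debt": [10, 0]}))
print(negative_evidence(applicant, "group", [0, 2]))  # True: group is inert
```

On the paper's view, a denied applicant who receives both outputs has positive evidence (a reachable state under their control that would have succeeded) and negative evidence (irrelevant group attributes do not move the decision) that the adverse decision was fair.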
Main author: | Leben, Derek |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2023 |
Subjects: | Psychology |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9971226/ https://www.ncbi.nlm.nih.gov/pubmed/36865358 http://dx.doi.org/10.3389/fpsyg.2023.1069426 |
_version_ | 1784898066738839552 |
---|---|
author | Leben, Derek |
collection | PubMed |
description | This paper will propose that explanations are valuable to those impacted by a model's decisions (model patients) to the extent that they provide evidence that a past adverse decision was unfair. Under this proposal, we should favor models and explainability methods which generate counterfactuals of two types. The first type of counterfactual is positive evidence of fairness: a set of states under the control of the patient which (if changed) would have led to a beneficial decision. The second type of counterfactual is negative evidence of fairness: a set of irrelevant group or behavioral attributes which (if changed) would not have led to a beneficial decision. Each of these counterfactual statements is related to fairness, under the Liberal Egalitarian idea that treating one person differently than another is justified only on the basis of features which were plausibly under each person's control. Other aspects of an explanation, such as feature importance and actionable recourse, are not essential under this view, and need not be a goal of explainable AI. |
format | Online Article Text |
id | pubmed-9971226 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9971226 2023-03-01. Explainable AI as evidence of fair decisions. Leben, Derek. Front Psychol (Psychology). Frontiers Media S.A., published online 2023-02-14. Text, English. Copyright © 2023 Leben. Open access under the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/): use, distribution, or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and the original publication in this journal is cited, in accordance with accepted academic practice; no use, distribution, or reproduction is permitted which does not comply with these terms. |
title | Explainable AI as evidence of fair decisions |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9971226/ https://www.ncbi.nlm.nih.gov/pubmed/36865358 http://dx.doi.org/10.3389/fpsyg.2023.1069426 |
work_keys_str_mv | AT lebenderek explainableaiasevidenceoffairdecisions |