PreCoF: counterfactual explanations for fairness
This paper studies how counterfactual explanations can be used to assess the fairness of a model. Using machine learning for high-stakes decisions is a threat to fairness as these models can amplify bias present in the dataset, and there is no consensus on a universal metric to detect this. The appropriate metric and method to tackle the bias in a dataset will be case-dependent, and it requires insight into the nature of the bias first. We aim to provide this insight by integrating explainable AI (XAI) research with the fairness domain. More specifically, apart from being able to use (Predictive) Counterfactual Explanations to detect explicit bias when the model is directly using the sensitive attribute, we show that it can also be used to detect implicit bias when the model does not use the sensitive attribute directly but does use other correlated attributes leading to a substantial disadvantage for a protected group. We call this metric PreCoF, or Predictive Counterfactual Fairness. Our experimental results show that our metric succeeds in detecting occurrences of implicit bias in the model by assessing which attributes are more present in the explanations of the protected group compared to the unprotected group. These results could help policymakers decide on whether this discrimination is justified or not.
Main Authors: | Goethals, Sofie; Martens, David; Calders, Toon |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047477/ https://www.ncbi.nlm.nih.gov/pubmed/37363047 http://dx.doi.org/10.1007/s10994-023-06319-8 |
_version_ | 1785013933589921792 |
author | Goethals, Sofie Martens, David Calders, Toon |
author_facet | Goethals, Sofie Martens, David Calders, Toon |
author_sort | Goethals, Sofie |
collection | PubMed |
description | This paper studies how counterfactual explanations can be used to assess the fairness of a model. Using machine learning for high-stakes decisions is a threat to fairness as these models can amplify bias present in the dataset, and there is no consensus on a universal metric to detect this. The appropriate metric and method to tackle the bias in a dataset will be case-dependent, and it requires insight into the nature of the bias first. We aim to provide this insight by integrating explainable AI (XAI) research with the fairness domain. More specifically, apart from being able to use (Predictive) Counterfactual Explanations to detect explicit bias when the model is directly using the sensitive attribute, we show that it can also be used to detect implicit bias when the model does not use the sensitive attribute directly but does use other correlated attributes leading to a substantial disadvantage for a protected group. We call this metric PreCoF, or Predictive Counterfactual Fairness. Our experimental results show that our metric succeeds in detecting occurrences of implicit bias in the model by assessing which attributes are more present in the explanations of the protected group compared to the unprotected group. These results could help policymakers decide on whether this discrimination is justified or not. |
format | Online Article Text |
id | pubmed-10047477 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-100474772023-03-29 PreCoF: counterfactual explanations for fairness Goethals, Sofie Martens, David Calders, Toon Mach Learn Article This paper studies how counterfactual explanations can be used to assess the fairness of a model. Using machine learning for high-stakes decisions is a threat to fairness as these models can amplify bias present in the dataset, and there is no consensus on a universal metric to detect this. The appropriate metric and method to tackle the bias in a dataset will be case-dependent, and it requires insight into the nature of the bias first. We aim to provide this insight by integrating explainable AI (XAI) research with the fairness domain. More specifically, apart from being able to use (Predictive) Counterfactual Explanations to detect explicit bias when the model is directly using the sensitive attribute, we show that it can also be used to detect implicit bias when the model does not use the sensitive attribute directly but does use other correlated attributes leading to a substantial disadvantage for a protected group. We call this metric PreCoF, or Predictive Counterfactual Fairness. Our experimental results show that our metric succeeds in detecting occurrences of implicit bias in the model by assessing which attributes are more present in the explanations of the protected group compared to the unprotected group. These results could help policymakers decide on whether this discrimination is justified or not. Springer US 2023-03-28 /pmc/articles/PMC10047477/ /pubmed/37363047 http://dx.doi.org/10.1007/s10994-023-06319-8 Text en © The Author(s), under exclusive licence to Springer Science+Business Media LLC, part of Springer Nature 2023, Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Goethals, Sofie Martens, David Calders, Toon PreCoF: counterfactual explanations for fairness |
title | PreCoF: counterfactual explanations for fairness |
title_full | PreCoF: counterfactual explanations for fairness |
title_fullStr | PreCoF: counterfactual explanations for fairness |
title_full_unstemmed | PreCoF: counterfactual explanations for fairness |
title_short | PreCoF: counterfactual explanations for fairness |
title_sort | precof: counterfactual explanations for fairness |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10047477/ https://www.ncbi.nlm.nih.gov/pubmed/37363047 http://dx.doi.org/10.1007/s10994-023-06319-8 |
work_keys_str_mv | AT goethalssofie precofcounterfactualexplanationsforfairness AT martensdavid precofcounterfactualexplanationsforfairness AT calderstoon precofcounterfactualexplanationsforfairness |
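The abstract describes detecting implicit bias by checking which attributes appear more often in the counterfactual explanations of the protected group than of the unprotected group. Below is a minimal, self-contained sketch of that comparison; the synthetic data, the scikit-learn logistic regression model, and the greedy one-feature-flip counterfactual search are illustrative assumptions only, not the paper's PreCoF algorithm, datasets, or implementation.

```python
# Illustrative sketch (not the authors' code): compare which features show up in
# counterfactual explanations for the protected vs. the unprotected group.
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary data: 5 features, the last column is the sensitive attribute.
n, d = 1000, 5
X = rng.integers(0, 2, size=(n, d))
# Make feature 2 correlate with the sensitive attribute, and the outcome depend on feature 2.
X[:, 2] = np.where(rng.random(n) < 0.7, X[:, 4], X[:, 2])
y = (X[:, 2] + rng.random(n) * 0.5 > 0.75).astype(int)

sensitive = X[:, 4]
X_model = np.delete(X, 4, axis=1)  # the sensitive attribute is not used by the model
model = LogisticRegression().fit(X_model, y)


def counterfactual_features(x):
    """Greedily flip single features until the prediction changes; return the flipped indices."""
    base = model.predict(x.reshape(1, -1))[0]
    for i in range(x.shape[0]):
        x_cf = x.copy()
        x_cf[i] = 1 - x_cf[i]
        if model.predict(x_cf.reshape(1, -1))[0] != base:
            return [i]
    return []


# Collect counterfactual explanations for all negatively predicted instances, split by group.
negatively_predicted = np.where(model.predict(X_model) == 0)[0]
counts = {0: Counter(), 1: Counter()}
for idx in negatively_predicted:
    for feat in counterfactual_features(X_model[idx]):
        counts[sensitive[idx]][feat] += 1

# Features that dominate the explanations of one group point to possible implicit bias
# (here, group 1 is arbitrarily treated as the protected group).
for group, counter in counts.items():
    total = sum(counter.values()) or 1
    print(f"group {group}:", {f: round(c / total, 2) for f, c in counter.items()})
```

In this toy setup, feature 2 acts as a proxy for the sensitive attribute, so it dominates the explanations of the negatively predicted protected group; whether such a pattern reflects justified or unjustified discrimination is the policy question the abstract raises.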