Transparency as design publicity: explaining and justifying inscrutable algorithms
In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations…
Main authors: | Loi, Michele; Ferrario, Andrea; Viganò, Eleonora |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Netherlands, 2020 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8626372/ https://www.ncbi.nlm.nih.gov/pubmed/34867077 http://dx.doi.org/10.1007/s10676-020-09564-w |
Field | Value |
---|---|
_version_ | 1784606642349801472 |
author | Loi, Michele; Ferrario, Andrea; Viganò, Eleonora |
author_facet | Loi, Michele; Ferrario, Andrea; Viganò, Eleonora |
author_sort | Loi, Michele |
collection | PubMed |
description | In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency that consists in explaining algorithms as an intentional product that serves a particular goal, or multiple goals (Daniel Dennett’s design stance), in a given domain of applicability, and that provides a measure of the extent to which such a goal is achieved, and evidence about the way that measure has been reached. We call this idea of algorithmic transparency “design publicity.” We argue that design publicity can be more easily linked with the justification of the use and of the design of the algorithm, and of each individual decision following from it. In comparison to post-hoc explanations of individual algorithmic decisions, design publicity meets a different demand (the demand for impersonal justification) of the explainee. Finally, we argue that when models that pursue justifiable goals (which may include fairness as avoidance of bias towards specific groups) to a justifiable degree are used consistently, the resulting decisions are all justified even if some of them are (unavoidably) based on incorrect predictions. For this argument, we rely on John Rawls’s idea of procedural justice applied to algorithms conceived as institutions. |
format | Online Article Text |
id | pubmed-8626372 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-8626372 2021-12-01 Transparency as design publicity: explaining and justifying inscrutable algorithms Loi, Michele; Ferrario, Andrea; Viganò, Eleonora Ethics Inf Technol Original Paper In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of algorithmic transparency that consists in explaining algorithms as an intentional product that serves a particular goal, or multiple goals (Daniel Dennett’s design stance), in a given domain of applicability, and that provides a measure of the extent to which such a goal is achieved, and evidence about the way that measure has been reached. We call this idea of algorithmic transparency “design publicity.” We argue that design publicity can be more easily linked with the justification of the use and of the design of the algorithm, and of each individual decision following from it. In comparison to post-hoc explanations of individual algorithmic decisions, design publicity meets a different demand (the demand for impersonal justification) of the explainee. Finally, we argue that when models that pursue justifiable goals (which may include fairness as avoidance of bias towards specific groups) to a justifiable degree are used consistently, the resulting decisions are all justified even if some of them are (unavoidably) based on incorrect predictions. For this argument, we rely on John Rawls’s idea of procedural justice applied to algorithms conceived as institutions. |
Springer Netherlands 2020-10-20 2021 /pmc/articles/PMC8626372/ /pubmed/34867077 http://dx.doi.org/10.1007/s10676-020-09564-w Text en © The Author(s) 2020 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Original Paper Loi, Michele Ferrario, Andrea Viganò, Eleonora Transparency as design publicity: explaining and justifying inscrutable algorithms |
title | Transparency as design publicity: explaining and justifying inscrutable algorithms |
title_full | Transparency as design publicity: explaining and justifying inscrutable algorithms |
title_fullStr | Transparency as design publicity: explaining and justifying inscrutable algorithms |
title_full_unstemmed | Transparency as design publicity: explaining and justifying inscrutable algorithms |
title_short | Transparency as design publicity: explaining and justifying inscrutable algorithms |
title_sort | transparency as design publicity: explaining and justifying inscrutable algorithms |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8626372/ https://www.ncbi.nlm.nih.gov/pubmed/34867077 http://dx.doi.org/10.1007/s10676-020-09564-w |
work_keys_str_mv | AT loimichele transparencyasdesignpublicityexplainingandjustifyinginscrutablealgorithms AT ferrarioandrea transparencyasdesignpublicityexplainingandjustifyinginscrutablealgorithms AT viganoeleonora transparencyasdesignpublicityexplainingandjustifyinginscrutablealgorithms |