Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions
Main Authors: | Casacuberta, David; Guersenzvaig, Ariel; Moyano-Fernández, Cristian |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer London, 2022 |
Subjects: | Open Forum |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8965536/ https://www.ncbi.nlm.nih.gov/pubmed/35370366 http://dx.doi.org/10.1007/s00146-022-01389-z |
author | Casacuberta, David; Guersenzvaig, Ariel; Moyano-Fernández, Cristian
collection | PubMed |
description | Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and on what grounds that result is reached. There are sustained technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative, non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, or data-related issues, we focus on the very conceptual underpinnings of the design decisions made by developers and other stakeholders during the lifecycle of a machine learning project. For instance, the design and development of an app that tracks snoring to detect possible health risks presupposes some picture or other of “health”, a key notion that conceptually underpins the project. We take it as a premise that these key concepts are necessarily present during design and development, albeit perhaps tacitly. We argue that by providing “justificatory explanations” about how the team understands the relevant key concepts behind its design decisions, interested parties could gain valuable insights and make better sense of the workings and outcomes of such systems. Using the concept of “health”, we illustrate how a particular understanding of it might influence decisions during the design and development stages of a machine learning project, and how making this explicit by incorporating it into ex-post explanations might increase the explanatory and justificatory power of those explanations. We posit that a greater conceptual awareness of the key concepts that underpin design and development decisions may benefit any attempt to develop explainability methods. We recommend that “justificatory explanations” be provided as technical documentation. These are declarative statements that contain, at their simplest: (1) a high-level account of the team’s understanding of the key concepts relevant to the project’s main domain; (2) how these understandings drive decision-making across the life-cycle stages; and (3) the reasons, which may be implicit in the account, that the person or persons providing the explanation consider to have plausible justificatory power for the decisions made during the project. |
format | Online Article Text |
id | pubmed-8965536 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer London |
record_format | MEDLINE/PubMed |
spelling | AI Soc, Open Forum. Springer London, published online 2022-03-30. |
license | © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. Material not included under the licence, or use exceeding what the licence permits, requires permission directly from the copyright holder. Licence: https://creativecommons.org/licenses/by/4.0/ |
title | Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions |
topic | Open Forum |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8965536/ https://www.ncbi.nlm.nih.gov/pubmed/35370366 http://dx.doi.org/10.1007/s00146-022-01389-z |
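The three-part structure recommended in the abstract lends itself to being captured as structured documentation alongside other project artifacts. Below is a minimal sketch in Python; the class, field names, and sample content are assumptions made for illustration, not an API or a worked example from the paper itself.

```python
# Hypothetical sketch of a justificatory explanation as a documentation
# record, following the three components named in the abstract. All names
# and sample content are illustrative assumptions, not taken from the paper.
from dataclasses import dataclass, field
from typing import List

@dataclass
class JustificatoryExplanation:
    # (1) High-level account of the team's understanding of a key concept
    key_concept: str
    understanding: str
    # (2) How that understanding drove decisions across life-cycle stages
    decisions: List[str] = field(default_factory=list)
    # (3) Reasons the explainer(s) consider to have justificatory power
    reasons: List[str] = field(default_factory=list)

# Illustrative instance for the snoring-tracking app mentioned in the
# abstract, assuming (purely for the example) a narrow, measurement-centred
# understanding of "health".
snoring_app_explanation = JustificatoryExplanation(
    key_concept="health",
    understanding=(
        "Health understood as the absence of measurable deviations, "
        "e.g. sleep-disordered breathing patterns."
    ),
    decisions=[
        "Data collection: label audio segments by apnea-like pause patterns.",
        "Deployment: flag users whose snoring profile crosses a risk threshold.",
    ],
    reasons=[
        "A measurement-centred notion of health keeps risk thresholds "
        "auditable, though it leaves broader well-being out of scope.",
    ],
)
```

Recording the reasons separately from the decisions keeps the justificatory element explicit rather than implicit, which is what the abstract presents as distinguishing these explanations from ordinary design records.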