
Justificatory explanations: a step beyond explainability in machine learning


Bibliographic Details
Main Authors: Guersenzvaig, A; Casacuberta, D
Format: Online Article Text
Language: English
Published: Oxford University Press, 2023
Subjects: Poster Walks
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10595304/
http://dx.doi.org/10.1093/eurpub/ckad160.873
Collection: PubMed
Description: AI systems may have many potential negative effects, so understanding how they generate results is important. AI explainability is a crucial but highly technical field that may be inaccessible to many experts in (public) health. We present a non-technical approach to the issue, focused on the conceptual foundations of the decisions made by developers and stakeholders during a system's lifecycle. To illustrate, imagine an app that tracks snoring in order to offer health recommendations. A classical explainability approach would evaluate the quality of the training dataset and the model itself. We argue that if we want to understand the system, we also need to examine the notion of "health", as it is the system's basic conceptual underpinning. We believe that these conceptual foundations, together with rich descriptions of the goals and purposes being pursued, can be integrated into what we call 'justificatory explanations'. These would illustrate how key concepts influence design and development decisions, thus offering valuable insights into the workings and outcomes of the explained system. 'Justificatory explanations' are declarative statements written in plain language that provide a high-level account of the team's understanding of the key concepts in the project's main domain, and of how these understandings drive decision-making across the lifecycle stages. We propose that 'justificatory explanations' be incorporated into the technical documentation that usually accompanies the release or deployment of a system. In short, 'justificatory explanations' complement other efforts around explainability by highlighting the reasons that the person or persons designing, developing, selling, and deploying the system consider to have plausible justificatory power for the decisions made during the project. This increased conceptual awareness may benefit everyone, even those without a background in mathematics, data science, or computer science.
Key messages:
• Explainability in AI should go beyond the data and the model to integrate design decisions.
• Justificatory reasons about the system should be provided in plain language.
Record ID: pubmed-10595304
Institution: National Center for Biotechnology Information
Record format: MEDLINE/PubMed
Published in: Eur J Public Health (Poster Walks), Oxford University Press, 2023-10-24.
© The Author(s) 2023. Published by Oxford University Press on behalf of the European Public Health Association. This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com.