As if sand were stone. New concepts and metrics to probe the ground on which to build trustable AI


Bibliographic Details
Main Authors: Cabitza, Federico; Campagner, Andrea; Sconfienza, Luca Maria
Format: Online Article Text
Language: English
Published: BioMed Central, 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7488864/
https://www.ncbi.nlm.nih.gov/pubmed/32917183
http://dx.doi.org/10.1186/s12911-020-01224-9
Collection: PubMed
Description:
BACKGROUND: We focus on the importance of interpreting the quality of the labeling used as the input of predictive models to understand the reliability of their output in support of human decision-making, especially in critical domains, such as medicine.
METHODS: Accordingly, we propose a framework distinguishing the reference labeling (or Gold Standard) from the set of annotations from which it is usually derived (the Diamond Standard). We define a set of quality dimensions and related metrics: representativeness (are the available data representative of their reference population?); reliability (do the raters agree with each other in their ratings?); and accuracy (are the raters’ annotations a true representation?). The metrics for these dimensions are, respectively, the degree of correspondence, Ψ, the degree of weighted concordance, ϱ, and the degree of fineness, Φ. We apply and evaluate these metrics in a diagnostic user study involving 13 radiologists.
RESULTS: We evaluate Ψ against hypothesis-testing techniques, highlighting that our metrics can better evaluate distribution similarity in high-dimensional spaces. We discuss how Ψ could be used to assess the reliability of new predictions or for train-test selection. We report the value of ϱ for our case study and compare it with traditional reliability metrics, highlighting both their theoretical properties and the reasons that they differ. Then, we report the degree of fineness as an estimate of the accuracy of the collected annotations and discuss the relationship between this latter degree and the degree of weighted concordance, which we find to be moderately but significantly correlated. Finally, we discuss the implications of the proposed dimensions and metrics with respect to the context of Explainable Artificial Intelligence (XAI).
CONCLUSION: We propose different dimensions and related metrics to assess the quality of the datasets used to build predictive models and Medical Artificial Intelligence (MAI). We argue that the proposed metrics are feasible for application in real-world settings for the continuous development of trustable and interpretable MAI systems.
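The abstract compares the degree of weighted concordance ϱ with traditional inter-rater reliability metrics without defining either here. As a point of reference for what a "traditional" metric looks like, the sketch below computes Cohen's kappa, the standard chance-corrected agreement measure for two raters; the rater labels are invented for illustration and are not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    who labeled the same items with categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's label frequencies,
    # assuming the two raters label independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical radiologists labeling 8 cases as positive/negative.
a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
b = ["pos", "pos", "neg", "pos", "pos", "neg", "neg", "neg"]
print(cohens_kappa(a, b))  # → 0.5
```

Here the raters agree on 6 of 8 cases (observed agreement 0.75), while balanced marginals give expected chance agreement 0.5, so kappa is (0.75 − 0.5) / (1 − 0.5) = 0.5. Metrics of this family correct raw agreement for chance but, unlike the weighted concordance the authors propose, do not weight disagreements by case difficulty.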
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: BMC Med Inform Decis Mak. Published online 2020-09-11.
License: © The Author(s) 2020. Open Access under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/); the Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in the article, unless otherwise stated.
Topic: Research Article