
Closed-Loop Uncertainty: The Evaluation and Calibration of Uncertainty for Human–Machine Teams under Data Drift

Bibliographic Details
Main Authors: Bishof, Zachary; Scheuerman, Jaelle; Michael, Chris J.
Format: Online Article, Text
Language: English
Published: MDPI 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606420/
https://www.ncbi.nlm.nih.gov/pubmed/37895564
http://dx.doi.org/10.3390/e25101443
author Bishof, Zachary
Scheuerman, Jaelle
Michael, Chris J.
collection PubMed
description Though an accurate measurement of entropy, or more generally uncertainty, is critical to the success of human–machine teams, the evaluation of the accuracy of such metrics as a probability of machine correctness is often aggregated and not assessed as an iterative control process. The entropy of the decisions made by human–machine teams may not be accurately measured under cold start or at times of data drift unless disagreements between the human and machine are immediately fed back to the classifier iteratively. In this study, we present a stochastic framework by which an uncertainty model may be evaluated iteratively as a probability of machine correctness. We target a novel problem, referred to as the threshold selection problem, which involves a user subjectively selecting the point at which a signal transitions to a low state. This problem is designed to be simple and replicable for human–machine experimentation while exhibiting properties of more complex applications. Finally, we explore the potential of incorporating feedback of machine correctness into a baseline naïve Bayes uncertainty model with a novel reinforcement learning approach. The approach refines a baseline uncertainty model by incorporating machine correctness at every iteration. Experiments are conducted over a large number of realizations to properly evaluate uncertainty at each iteration of the human–machine team. Results show that our novel approach, called closed-loop uncertainty, outperforms the baseline in every case, yielding about 45% improvement on average.
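To make the abstract's mechanism concrete, the following is a minimal Python sketch of the closed-loop idea, not the authors' implementation: their model is a naïve Bayes uncertainty model refined with reinforcement learning, whereas everything here (the signal generator, the toy threshold detector, the tolerance tol standing in for human agreement, and the exponentially discounted Beta posterior) is a hypothetical stand-in, chosen only to show correctness feedback updating the predicted probability of machine correctness at every iteration, including across a simulated drift.

import numpy as np

rng = np.random.default_rng(0)

def make_signal(n=200, noise=0.3):
    # One realization of the threshold selection problem: a noisy signal
    # that steps from a high state to a low state at a random point.
    t_true = int(rng.integers(n // 4, 3 * n // 4))
    clean = np.where(np.arange(n) < t_true, 1.0, 0.0)
    return clean + rng.normal(0.0, noise, n), t_true

def machine_threshold(signal, window=9):
    # Toy classifier: first index (past the edge region) where the
    # smoothed signal drops below the midpoint between states.
    smooth = np.convolve(signal, np.ones(window) / window, mode="same")
    below = np.nonzero(smooth[window:] < 0.5)[0]
    return int(below[0]) + window if below.size else len(signal) - 1

# Exponentially discounted Beta posterior over P(machine correct): the
# closed-loop element. "Correct" means within tol samples of the true
# transition, a stand-in for human agreement with the machine's selection.
alpha, beta, tol, decay = 1.0, 1.0, 5, 0.98
gaps = []

for step in range(1000):
    noise = 0.3 if step < 500 else 0.7  # abrupt data drift halfway through
    signal, t_true = make_signal(noise=noise)
    t_machine = machine_threshold(signal)

    p_correct = alpha / (alpha + beta)        # predicted P(correct) this step
    correct = abs(t_machine - t_true) <= tol  # correctness feedback

    alpha = decay * alpha + correct           # feed the outcome back
    beta = decay * beta + (1 - correct)       # immediately, every iteration

    gaps.append(abs(p_correct - float(correct)))

print(f"final P(correct) estimate: {alpha / (alpha + beta):.3f}")
print(f"mean absolute calibration gap: {np.mean(gaps):.3f}")

The discount factor is the closed-loop ingredient here: without it the posterior would average over all history and stay miscalibrated after the drift, whereas with it the predicted probability of correctness re-converges to the post-drift correctness rate.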
format Online
Article
Text
id pubmed-10606420
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10606420 2023-10-28 Entropy (Basel) Article MDPI 2023-10-12 /pmc/articles/PMC10606420/ /pubmed/37895564 http://dx.doi.org/10.3390/e25101443 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title Closed-Loop Uncertainty: The Evaluation and Calibration of Uncertainty for Human–Machine Teams under Data Drift
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606420/
https://www.ncbi.nlm.nih.gov/pubmed/37895564
http://dx.doi.org/10.3390/e25101443