Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test
Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well-calibrated, an expert’s credible intervals should cover the true (but unknown) values a certain percentage of time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts’ performance. An approach that is commonly applied to assess experts’ performance by using these questions is to directly compare the stated percentage cover with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis testing framework. We generalize the test to an equivalence testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that the formal testing of the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings.
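The testing problem the abstract describes can be made concrete with a small sketch. Below is a minimal Python illustration (not taken from the paper; the 0.05 equivalence margin and the 9-of-10 data are hypothetical assumptions) contrasting the classical exact binomial test of coverage with a TOST-style equivalence test, and showing why a modest number of calibration questions yields little power.

```python
# Minimal sketch, not the authors' code: comparing the classical test of
# expert calibration with a TOST-style equivalence test. An expert states
# 90% credible intervals; k of n calibration questions land inside them.
from scipy.stats import binom, binomtest

def classical_test(k, n, p0=0.9):
    """Exact two-sided binomial test of H0: true coverage = p0.

    Rejecting shows miscalibration; failing to reject is often
    (incorrectly) read as evidence of good calibration.
    """
    return binomtest(k, n, p0).pvalue

def equivalence_test(k, n, p0=0.9, margin=0.05):
    """Two one-sided binomial tests (TOST) of
    H0: |true coverage - p0| >= margin  vs  H1: coverage within the margin.

    Rejecting gives positive evidence the expert is approximately
    well calibrated. The margin is an assumption of this sketch.
    """
    lower, upper = p0 - margin, p0 + margin
    p_lo = binom.sf(k - 1, n, lower)  # P(X >= k | p = lower): coverage clearly above lower bound?
    p_hi = binom.cdf(k, n, upper)     # P(X <= k | p = upper): coverage clearly below upper bound?
    return max(p_lo, p_hi)            # TOST rejects only if both one-sided tests reject

# Hypothetical data: 9 of 10 intervals covered the true value.
k, n = 9, 10
print(classical_test(k, n))    # ~1.0: cannot demonstrate miscalibration
print(equivalence_test(k, n))  # ~0.54: cannot demonstrate calibration either
```

With only ten questions neither test is informative: the classical test cannot reject exact calibration, and the equivalence test cannot reject meaningful miscalibration, which is the power problem the abstract reports.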
| Main Authors: | Dharmarathne, Gayan; Hanea, Anca M.; Robinson, Andrew |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2022 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9222732/ https://www.ncbi.nlm.nih.gov/pubmed/35741478 http://dx.doi.org/10.3390/e24060757 |
_version_ | 1784732941357678592
author | Dharmarathne, Gayan; Hanea, Anca M.; Robinson, Andrew
author_facet | Dharmarathne, Gayan; Hanea, Anca M.; Robinson, Andrew
author_sort | Dharmarathne, Gayan |
collection | PubMed |
description | Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well-calibrated, an expert’s credible intervals should cover the true (but unknown) values a certain percentage of time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts’ performance. An approach that is commonly applied to assess experts’ performance by using these questions is to directly compare the stated percentage cover with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis testing framework. We generalize the test to an equivalence testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that the formal testing of the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings. |
format | Online Article Text |
id | pubmed-9222732 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9222732 2022-06-24 Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test Dharmarathne, Gayan Hanea, Anca M. Robinson, Andrew Entropy (Basel) Article Estimates based on expert judgements of quantities of interest are commonly used to supplement or replace measurements when the latter are too expensive or impossible to obtain. Such estimates are commonly accompanied by information about the uncertainty of the estimate, such as a credible interval. To be considered well-calibrated, an expert’s credible intervals should cover the true (but unknown) values a certain percentage of time, equal to the percentage specified by the expert. To assess expert calibration, so-called calibration questions may be asked in an expert elicitation exercise; these are questions with known answers used to assess and compare experts’ performance. An approach that is commonly applied to assess experts’ performance by using these questions is to directly compare the stated percentage cover with the actual coverage. We show that this approach has statistical drawbacks when considered in a rigorous hypothesis testing framework. We generalize the test to an equivalence testing framework and discuss the properties of this new proposal. We show that comparisons made on even a modest number of calibration questions have poor power, which suggests that the formal testing of the calibration of experts in an experimental setting may be prohibitively expensive. We contextualise the theoretical findings with a couple of applications and discuss the implications of our findings. MDPI 2022-05-27 /pmc/articles/PMC9222732/ /pubmed/35741478 http://dx.doi.org/10.3390/e24060757 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Dharmarathne, Gayan Hanea, Anca M. Robinson, Andrew Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test |
title | Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test |
title_full | Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test |
title_fullStr | Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test |
title_full_unstemmed | Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test |
title_short | Are Experts Well-Calibrated? An Equivalence-Based Hypothesis Test |
title_sort | are experts well-calibrated? an equivalence-based hypothesis test |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9222732/ https://www.ncbi.nlm.nih.gov/pubmed/35741478 http://dx.doi.org/10.3390/e24060757 |
work_keys_str_mv | AT dharmarathnegayan areexpertswellcalibratedanequivalencebasedhypothesistest AT haneaancam areexpertswellcalibratedanequivalencebasedhypothesistest AT robinsonandrew areexpertswellcalibratedanequivalencebasedhypothesistest |