Measuring Agreement Using Guessing Models and Knowledge Coefficients
Format: Online Article Text
Language: English
Published: Springer US, 2023
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10444669/
https://www.ncbi.nlm.nih.gov/pubmed/37291419
http://dx.doi.org/10.1007/s11336-023-09919-4
Summary: Several measures of agreement, such as the Perreault–Leigh coefficient, the [Formula: see text], and the recent coefficient of van Oest, are based on explicit models of how judges make their ratings. To handle such measures of agreement under a common umbrella, we propose a class of models called guessing models, which contains most models of how judges make their ratings. Every guessing model has an associated measure of agreement we call the knowledge coefficient. Under certain assumptions on the guessing models, the knowledge coefficient will be equal to the multi-rater Cohen's kappa, Fleiss' kappa, the Brennan–Prediger coefficient, or other less-established measures of agreement. We provide several sample estimators of the knowledge coefficient, valid under varying assumptions, and their asymptotic distributions. After a sensitivity analysis and a simulation study of confidence intervals, we find that the Brennan–Prediger coefficient typically outperforms the others, with much better coverage under unfavorable circumstances.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11336-023-09919-4.
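The summary singles out the Brennan–Prediger coefficient as the best performer. As a point of reference only (this sketch is not from the article), the two-rater Brennan–Prediger coefficient uses the standard chance-correction formula (P_o - 1/q) / (1 - 1/q), where P_o is the observed proportion of agreement and q is the number of rating categories; the function and data below are illustrative:

```python
def brennan_prediger(ratings_a, ratings_b, q):
    """Two-rater Brennan-Prediger coefficient over q rating categories.

    Chance agreement is fixed at 1/q (uniform guessing), so the
    coefficient is (P_o - 1/q) / (1 - 1/q), with P_o the observed
    proportion of items on which the two raters agree.
    """
    if len(ratings_a) != len(ratings_b):
        raise ValueError("rating vectors must have equal length")
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    return (p_o - 1 / q) / (1 - 1 / q)

# Illustrative example: two raters classify 6 items into q = 3 categories,
# agreeing on 5 of 6 items (P_o = 5/6).
a = [1, 2, 3, 1, 2, 3]
b = [1, 2, 3, 1, 3, 3]
print(brennan_prediger(a, b, 3))  # (5/6 - 1/3) / (2/3) = 0.75
```

Unlike Cohen's kappa, this estimator does not depend on the raters' marginal category frequencies, which is one reason it behaves more stably in simulation studies of the kind the summary describes.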