Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters

Bibliographic Details
Main Authors: Li, Ming, Gao, Qian, Yu, Tianfei
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10464133/
https://www.ncbi.nlm.nih.gov/pubmed/37626309
http://dx.doi.org/10.1186/s12885-023-11325-z
_version_ 1785098397901914112
author Li, Ming
Gao, Qian
Yu, Tianfei
author_facet Li, Ming
Gao, Qian
Yu, Tianfei
author_sort Li, Ming
collection PubMed
description BACKGROUND: In research designs that rely on observational ratings provided by two raters, assessing inter-rater reliability (IRR) is a frequently required task. However, some studies misapply statistical procedures, omit information essential for interpreting their findings, or inadequately address the impact of IRR on the statistical power of subsequent hypothesis tests. METHODS: This article examines the recent publication by Liu et al. in BMC Cancer, analyzing the controversy surrounding the Kappa statistic and methodological issues in the assessment of IRR. The primary focus is on the appropriate selection of Kappa statistics, as well as the computation, interpretation, and reporting of two frequently used IRR statistics when two raters are involved. RESULTS: Cohen's Kappa statistic is typically used to assess the level of agreement between two raters when there are two categories, or for unordered categorical variables with three or more categories. In contrast, the weighted Kappa is the widely used measure of agreement between two raters for ordered categorical variables with three or more categories. CONCLUSION: Although the statistical dispute does not substantially affect the findings of Liu et al.'s study, it underscores the importance of employing suitable statistical methods. Rigorous and accurate statistical results are crucial for producing trustworthy research.
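As a minimal illustration of the distinction drawn in the RESULTS above (not taken from the article itself), the sketch below computes both statistics with scikit-learn's cohen_kappa_score. The ratings are hypothetical, and quadratic weighting is only one common choice for ordinal scales.

# A minimal sketch, assuming scikit-learn is available; data are invented.
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings from two raters on the same 10 cases,
# using an ordered three-level scale (0 = low, 1 = medium, 2 = high).
rater_a = [0, 1, 2, 2, 1, 0, 1, 2, 0, 1]
rater_b = [0, 1, 2, 1, 1, 0, 2, 2, 0, 1]

# Unweighted Cohen's Kappa: suited to two categories or unordered (nominal)
# categories, where every disagreement counts equally.
kappa = cohen_kappa_score(rater_a, rater_b)

# Weighted Kappa: suited to ordered (ordinal) categories, where near-misses
# are penalized less than distant disagreements.
weighted_kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")

print(f"Cohen's Kappa:  {kappa:.3f}")
print(f"Weighted Kappa: {weighted_kappa:.3f}")

With ordinal data such as the three-level scale above, the weighted Kappa typically exceeds the unweighted value because adjacent-category disagreements receive partial credit.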
format Online
Article
Text
id pubmed-10464133
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BioMed Central
record_format MEDLINE/PubMed
spellingShingle Matters Arising
Li, Ming
Gao, Qian
Yu, Tianfei
Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
title Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
title_full Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
title_fullStr Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
title_full_unstemmed Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
title_short Kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
title_sort kappa statistic considerations in evaluating inter-rater reliability between two raters: which, when and context matters
topic Matters Arising
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10464133/
https://www.ncbi.nlm.nih.gov/pubmed/37626309
http://dx.doi.org/10.1186/s12885-023-11325-z
work_keys_str_mv AT liming kappastatisticconsiderationsinevaluatinginterraterreliabilitybetweentworaterswhichwhenandcontextmatters
AT gaoqian kappastatisticconsiderationsinevaluatinginterraterreliabilitybetweentworaterswhichwhenandcontextmatters
AT yutianfei kappastatisticconsiderationsinevaluatinginterraterreliabilitybetweentworaterswhichwhenandcontextmatters