Evaluation of Inter-Observer Reliability of Animal Welfare Indicators: Which Is the Best Index to Use?

Bibliographic Details
Main Authors: Giammarino, Mauro, Mattiello, Silvana, Battini, Monica, Quatto, Piero, Battaglini, Luca Maria, Vieira, Ana C. L., Stilwell, George, Renna, Manuela
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8157558/
https://www.ncbi.nlm.nih.gov/pubmed/34069942
http://dx.doi.org/10.3390/ani11051445
Description
Summary: SIMPLE SUMMARY: To be effective, on-farm welfare assessment protocols should always rely on indicators that are reliable, as well as valid and feasible. Inter-observer reliability refers to the extent to which two or more observers observe and record data in the same way. The present study focuses on the problem of assessing inter-observer reliability for dichotomous (e.g., yes/no) welfare indicators scored by two observers, in order to decide whether such indicators should be included in welfare assessment protocols. We compared the performance of the most popular currently available agreement indexes. Some widely used indexes proved inappropriate for evaluating inter-observer reliability when agreement between observers was high. Other, less used indexes, such as Bangdiwala's B or Gwet's γ, were found to perform better and are therefore suggested for assessing the inter-observer reliability of dichotomous indicators.

ABSTRACT: This study focuses on the problem of assessing inter-observer reliability (IOR) for dichotomous categorical animal-based welfare indicators scored by two observers. Based on observations obtained from Animal Welfare Indicators (AWIN) project surveys conducted on nine dairy goat farms, and using udder asymmetry as an indicator, we compared the performance of the most popular agreement indexes available in the literature, including Scott's π, Cohen's κ, Holsti's index, Krippendorff's α, Hubert's Γ, Janson and Vegelius' J, Bangdiwala's B, Andrés and Marzo's Δ, and Gwet's γ. Confidence intervals were calculated using closed formulas of variance estimates for eight of the indexes, while the bootstrap and exact bootstrap methods were used for all of them. All the indexes and the closed variance formulas were calculated in Microsoft Excel; the bootstrap method was performed with R software and the exact bootstrap method with SAS software. π, κ, and α exhibited a paradoxical behavior, showing unacceptably low values even in the presence of very high concordance rates. B and γ showed values very close to the concordance rate, independently of its value. Both the bootstrap and the exact bootstrap methods turned out to be simpler to implement than the closed variance formulas and provided effective confidence intervals for all the considered indexes. The best approach for measuring IOR in these cases is the use of B or γ, with the bootstrap or exact bootstrap method for confidence interval calculation.
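The abstract notes that the indexes were computed in Microsoft Excel and that the bootstrap confidence intervals were obtained with R (the exact bootstrap with SAS). As a rough illustration of what those computations involve (a base-R sketch, not the authors' code), the snippet below computes the concordance rate, Cohen's κ, Scott's π, Gwet's γ (interpreted here as Gwet's AC1) and Bangdiwala's B for a two-observer 2×2 cross-classification of a dichotomous indicator, and derives a percentile bootstrap confidence interval. The function names (agreement_indexes, boot_ci) and the cell counts are hypothetical and are not taken from the AWIN surveys.

# Sketch only: agreement indexes for two observers scoring a dichotomous
# indicator (e.g., udder asymmetry: yes/no). Cells a, b, c, d form the 2x2
# cross-classification of observer 1 (rows) vs observer 2 (columns);
# a = both "yes", d = both "no". All counts used below are hypothetical.
agreement_indexes <- function(a, b, c, d) {
  n    <- a + b + c + d
  po   <- (a + d) / n                        # concordance rate (observed agreement)
  p1   <- (a + b) / n                        # observer 1 "yes" proportion
  q1   <- (a + c) / n                        # observer 2 "yes" proportion
  pbar <- (p1 + q1) / 2                      # average "yes" proportion
  pe_kappa <- p1 * q1 + (1 - p1) * (1 - q1)  # chance agreement, Cohen's kappa
  pe_pi    <- pbar^2 + (1 - pbar)^2          # chance agreement, Scott's pi
  pe_ac1   <- 2 * pbar * (1 - pbar)          # chance agreement, Gwet's AC1
  c(concordance = po,
    kappa = (po - pe_kappa) / (1 - pe_kappa),
    pi    = (po - pe_pi)    / (1 - pe_pi),
    AC1   = (po - pe_ac1)   / (1 - pe_ac1),
    B     = (a^2 + d^2) / ((a + b) * (a + c) + (c + d) * (b + d)))  # Bangdiwala's B
}

# Percentile bootstrap CI for one index, resampling the n observation pairs.
boot_ci <- function(a, b, c, d, index = "AC1", n_boot = 2000, conf = 0.95) {
  pairs <- rep(1:4, times = c(a, b, c, d))   # 1 = cell a, ..., 4 = cell d
  stats <- replicate(n_boot, {
    counts <- tabulate(sample(pairs, replace = TRUE), nbins = 4)
    agreement_indexes(counts[1], counts[2], counts[3], counts[4])[index]
  })
  quantile(stats, probs = c((1 - conf) / 2, 1 - (1 - conf) / 2), na.rm = TRUE)
}

# Hypothetical, highly unbalanced table with 98% observed agreement.
agreement_indexes(a = 98, b = 1, c = 1, d = 0)
boot_ci(98, 1, 1, 0, index = "kappa")
boot_ci(98, 1, 1, 0, index = "AC1")

With these deliberately unbalanced example counts, observed agreement is 0.98 while κ and π come out slightly below zero, whereas AC1 and B stay close to 0.98, which mirrors the paradoxical behavior the abstract describes for the chance-corrected indexes.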