Interrater agreement of two adverse drug reaction causality assessment methods: A randomised comparison of the Liverpool Adverse Drug Reaction Causality Assessment Tool and the World Health Organization-Uppsala Monitoring Centre system
Main authors:
Format: Online Article Text
Language: English
Published: Public Library of Science, 2017
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5325562/
https://www.ncbi.nlm.nih.gov/pubmed/28235001
http://dx.doi.org/10.1371/journal.pone.0172830
Summary:
INTRODUCTION: A new method to assess causality of suspected adverse drug reactions, the Liverpool Adverse Drug Reaction Causality Assessment Tool (LCAT), showed high interrater agreement when used by its developers. Our aim was to compare the interrater agreement achieved by LCAT to that achieved by another causality assessment method, the World Health Organization-Uppsala Monitoring Centre system for standardised case causality assessment (WHO-UMC system), in our setting.
METHODS: Four raters independently assessed adverse drug reaction causality of 48 drug-event pairs, identified during a hospital-based survey. A randomised design ensured that no washout period was required between assessments with the two methods. We compared the methods’ interrater agreement by calculating agreement proportions, kappa statistics, and the intraclass correlation coefficient. We identified potentially problematic questions in the LCAT by comparing raters’ responses to individual questions.
RESULTS: Overall unweighted kappa was 0.61 (95% CI 0.43 to 0.80) on the WHO-UMC system and 0.27 (95% CI 0.074 to 0.46) on the LCAT. Pairwise unweighted Cohen kappa ranged from 0.33 to 1.0 on the WHO-UMC system and from 0.094 to 0.71 on the LCAT. The intraclass correlation coefficient was 0.86 (95% CI 0.74 to 0.92) on the WHO-UMC system and 0.61 (95% CI 0.39 to 0.77) on the LCAT. Two LCAT questions were identified as significant points of disagreement.
DISCUSSION: We were unable to replicate the high interrater agreement achieved by the LCAT developers and instead found its interrater agreement to be lower than that achieved when using the WHO-UMC system. We identified potential reasons for this and recommend priority areas for improving the LCAT.
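The METHODS section above describes quantifying interrater agreement with pairwise unweighted Cohen kappa and an overall kappa across four raters. The sketch below is only an illustration of how such statistics are commonly computed, not the authors' analysis code: the ratings matrix, the causality category codes, and the use of Fleiss' kappa as the overall multi-rater statistic are all assumptions made for the example.

```python
# Illustrative sketch only: pairwise Cohen's kappa and an overall multi-rater
# (Fleiss') kappa for categorical causality ratings. All data are hypothetical.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = drug-event pairs, columns = raters; values are causality categories
# (e.g. 0 = unlikely, 1 = possible, 2 = probable, 3 = certain) -- made-up data.
ratings = np.array([
    [2, 2, 3, 2],
    [1, 1, 1, 0],
    [3, 3, 3, 3],
    [0, 1, 1, 1],
    [2, 1, 2, 2],
])

# Pairwise unweighted Cohen kappa for every pair of raters.
n_raters = ratings.shape[1]
for i, j in combinations(range(n_raters), 2):
    kappa = cohen_kappa_score(ratings[:, i], ratings[:, j])
    print(f"Cohen kappa, rater {i} vs rater {j}: {kappa:.2f}")

# One common overall agreement statistic across all raters: Fleiss' kappa,
# computed from a subjects-by-categories count table.
table, _ = aggregate_raters(ratings)
print(f"Fleiss kappa (all raters): {fleiss_kappa(table):.2f}")
```

A weighted kappa or an intraclass correlation coefficient, as also reported in the abstract, would additionally account for how far apart disagreeing categories are rather than treating every disagreement equally.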