Evaluations of statistical methods for outlier detection when benchmarking in clinical registries: a systematic review
Main authors: , ,
Format: Online Article Text
Language: English
Published: BMJ Publishing Group, 2023
Subjects:
Online access:
- https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10351235/
- https://www.ncbi.nlm.nih.gov/pubmed/37451708
- http://dx.doi.org/10.1136/bmjopen-2022-069130
Summary: OBJECTIVES: Benchmarking is common in clinical registries to support the improvement of health outcomes by identifying underperforming clinicians or health service providers. Despite the rise in clinical registries and interest in publicly reporting benchmarking results, appropriate methods for benchmarking and outlier detection within clinical registries are not well established, and the current application of methods is inconsistent. The aim of this review was to determine which statistical methods of outlier detection have been evaluated in the context of clinical registry benchmarking. DESIGN: A systematic search for studies evaluating the performance of methods to detect outliers when benchmarking in clinical registries was conducted in five databases: EMBASE, ProQuest, Scopus, Web of Science and Google Scholar. A modified healthcare modelling evaluation tool was used to assess quality; data extracted from each study were summarised and presented in a narrative synthesis. RESULTS: Nineteen studies evaluating a variety of statistical methods across 20 clinical registries were included. The majority were application studies comparing identified outliers without statistical performance assessment (79%); only a few studies used simulations to conduct more rigorous evaluations (21%). A common comparison was between random-effects and fixed-effects regression, which provided mixed results. Registry population coverage, minimum provider case volume and missing data handling were all poorly reported. CONCLUSIONS: The optimal methods for detecting outliers when benchmarking clinical registry data remain unclear, and the use of different models may provide vastly different results. Further research is needed to address the unresolved methodological considerations and evaluate methods across a range of registry conditions. PROSPERO REGISTRATION NUMBER: CRD42022296520.