Reliability and validity of multicentre surveillance of surgical site infections after colorectal surgery
Main Authors:
Format: Online Article Text
Language: English
Published: BioMed Central, 2022
Subjects:
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8780777/
https://www.ncbi.nlm.nih.gov/pubmed/35063009
http://dx.doi.org/10.1186/s13756-022-01050-w
Summary:

BACKGROUND: Surveillance is the cornerstone of surgical site infection (SSI) prevention programs. The validity of the data collection and awareness of vulnerability to inter-rater variation are crucial for correct interpretation and use of surveillance data. The aim of this study was to investigate the reliability and validity of SSI surveillance after colorectal surgery in the Netherlands.

METHODS: In this multicentre prospective observational study, seven Dutch hospitals performed SSI surveillance after colorectal surgeries performed in 2018 and/or 2019. While executing the surveillance, each hospital performed a local case assessment to calculate the overall percent agreement between raters within the hospital. Additionally, two case-vignette assessments were performed to estimate intra-rater and inter-rater reliability by calculating weighted Cohen's Kappa and Fleiss' Kappa coefficients. To estimate validity, the answers to the two case-vignette questionnaires were compared with those of an external medical panel.

RESULTS: In total, 1111 colorectal surgeries were included in this study, with an overall SSI incidence of 8.8% (n = 98). From the local case assessment it was estimated that the overall percent agreement between raters within a hospital was good (mean 95%, range 90–100%). The Cohen's Kappa estimated for the intra-rater reliability of case-vignette review varied from 0.73 to 1.00, indicating substantial to perfect agreement. The inter-rater reliability within hospitals showed more variation, with Kappa estimates ranging between 0.61 and 0.94. In total, 87.9% of the answers given by the raters were in accordance with the medical panel.

CONCLUSIONS: This study showed that raters were consistent in their SSI ascertainment (good reliability), but improvements can be made regarding accuracy (moderate validity). Accuracy of surveillance may be improved by providing regular training, adapting definitions to reduce subjectivity, and supporting surveillance through automation.
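The agreement statistics named in the methods are straightforward to reproduce. Below is a minimal sketch, using hypothetical case-vignette ratings on an illustrative 0/1/2 scale (no SSI / superficial SSI / deep SSI) that is not taken from the study; it computes raw percent agreement, a weighted Cohen's Kappa via scikit-learn, and Fleiss' Kappa via statsmodels.

```python
# Sketch of the abstract's agreement measures on hypothetical rating data.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical repeat classifications by one rater (intra-rater reliability):
# 0 = no SSI, 1 = superficial SSI, 2 = deep SSI (coding is illustrative).
first_review = np.array([0, 1, 2, 1, 0, 2, 1, 0])
second_review = np.array([0, 1, 2, 2, 0, 2, 1, 0])

# Overall percent agreement: share of cases classified identically.
print("percent agreement:", (first_review == second_review).mean() * 100)

# Weighted Cohen's Kappa: chance-corrected agreement that penalises
# disagreements by their distance on the ordinal scale.
print("weighted Cohen's kappa:",
      cohen_kappa_score(first_review, second_review, weights="linear"))

# Fleiss' Kappa for more than two raters (inter-rater reliability):
# rows are cases, columns are raters.
ratings = [
    [0, 0, 1],
    [1, 1, 1],
    [2, 1, 2],
    [0, 0, 0],
]
table, _ = aggregate_raters(ratings)  # per-case counts for each category
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```

Both Kappa statistics share the form (p_o − p_e) / (1 − p_e), where p_o is observed agreement and p_e is the agreement expected by chance, which is why they are preferred over raw percent agreement when category prevalence is skewed.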