Poster 121: Inter- and Intra-Observer Agreement are Unsatisfactory when Determining Study Design and Level of Evidence
Format: Online Article Text
Language: English
Published: SAGE Publications, 2022
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9344158/ | http://dx.doi.org/10.1177/2325967121S00682
Summary:

OBJECTIVES: Understanding differences between types of study design (SD) and level of evidence (LOE) is important when selecting research for presentation or publication and determining its potential clinical impact. The purpose of this study was to evaluate inter- and intra-observer reliability when assigning LOE and SD, as well as to quantify the impact of a commonly used reference aid on these assessments.

METHODS: Thirty-six accepted abstracts from the Pediatric Orthopaedic Society of North America (POSNA) 2021 annual meeting were selected for this study. Thirteen reviewers from the POSNA Evidence Based Practice Committee were asked to determine LOE and SD for each abstract, first without any assistance or resources. Four weeks later, the abstracts were reviewed again with the guidance of the Journal of Bone and Joint Surgery (JBJS) LOE chart, which is adapted from the Oxford Centre for Evidence-Based Medicine. Inter- and intra-observer reliability were calculated using Fleiss' kappa statistic (k). Chi-square analysis was used to compare the rate of SD-LOE mismatch between the first and second rounds of review.

RESULTS: Inter-observer reliability for LOE improved slightly, from fair (k=0.28) to moderate (k=0.43), with use of the JBJS chart. Agreement was better at stronger levels of evidence, with the most frequent disagreement between levels 3 and 4. Inter-observer reliability for SD was fair in both round 1 (k=0.29) and round 2 (k=0.37). As with LOE, agreement was better for stronger study designs. The most common disagreements were between retrospective cohort, retrospective case-control, and case series. Intra-observer reliability was widely variable for both LOE and SD (k=0.10 to 0.92 for both). When matching a selected SD to its associated LOE, the overall rate of correct concordance was 82% in round 1 and 92% in round 2 (p<0.001). Six of the 13 respondents improved in round 2, and 3 were 100% correct both times.

CONCLUSIONS: Inter-observer reliability for LOE and SD was fair to moderate at best, even among experienced reviewers. Use of the JBJS/Oxford chart mildly improved agreement on LOE and resulted in less SD-LOE mismatch, but did not affect agreement on SD. Professional societies and journals may consider requiring more specific information on SD from authors, while authors and reviewers may benefit from improved instruments to guide SD and LOE designation.
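For readers who want to run this kind of reliability analysis themselves, the sketch below shows how the two statistics named in METHODS could be computed in Python. This is not the authors' code: the ratings matrix is randomly generated placeholder data (only its 36 x 13 shape comes from the abstract), and the chi-square comparison back-calculates concordance counts from the 82% and 92% rates reported in RESULTS under the assumption that all 36 x 13 reviews were completed in both rounds.

```python
# Minimal sketch of the abstract's analysis: Fleiss' kappa for
# inter-observer agreement and a chi-square test comparing SD-LOE
# concordance between review rounds. Dummy data, not study data.
import numpy as np
from scipy.stats import chi2_contingency
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: 36 abstracts (rows) x 13 reviewers (columns),
# each cell a LOE assignment from 1 (strongest) to 5 (weakest).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(36, 13))

# aggregate_raters converts subject-by-rater codes into the
# subject-by-category count table that fleiss_kappa expects.
counts, _categories = aggregate_raters(ratings)
kappa = fleiss_kappa(counts)
print(f"Fleiss' kappa for LOE: {kappa:.2f}")

# Chi-square comparison of SD-LOE concordance between rounds, using the
# reported rates (82% vs. 92%) and an assumed 36 x 13 = 468 reviews/round.
n = 36 * 13
concordant = np.round([0.82 * n, 0.92 * n]).astype(int)
table = np.array([concordant, n - concordant]).T  # rows: round 1, round 2
chi2, p, _dof, _expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```

With random placeholder ratings, kappa will hover near 0 (chance-level agreement); real reviewer data would be needed to reproduce the fair-to-moderate values reported above.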