From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions
BACKGROUND: GRADE was developed to address shortcomings of tools to rate the quality of a body of evidence. While much has been published about GRADE, there are few empirical and systematic evaluations. OBJECTIVE: To assess GRADE for systematic reviews (SRs) in terms of inter-rater agreement and identify areas of uncertainty. DESIGN: Cross-sectional, descriptive study. METHODS: We applied GRADE to three SRs (n = 48, 66, and 75 studies, respectively) with 29 comparisons and 12 outcomes overall. Two reviewers graded evidence independently for outcomes deemed clinically important a priori. Inter-rater reliability was assessed using kappas for four main domains (risk of bias, consistency, directness, and precision) and overall quality of evidence. RESULTS: For the first review, reliability was: κ = 0.41 for risk of bias; 0.84 consistency; 0.18 precision; and 0.44 overall quality. Kappa could not be calculated for directness as one rater assessed all items as direct; assessors agreed in 41% of cases. For the second review reliability was: 0.37 consistency and 0.19 precision. Kappa could not be assessed for other items; assessors agreed in 33% of cases for risk of bias; 100% directness; and 58% overall quality. For the third review, reliability was: 0.06 risk of bias; 0.79 consistency; 0.21 precision; and 0.18 overall quality. Assessors agreed in 100% of cases for directness. Precision created the most uncertainty due to difficulties in identifying “optimal” information size and “clinical decision threshold”, as well as making assessments when there was no meta-analysis. The risk of bias domain created uncertainty, particularly for nonrandomized studies. CONCLUSIONS: As researchers with varied levels of training and experience use GRADE, there is risk for variability in interpretation and application. This study shows variable agreement across the GRADE domains, reflecting areas where further guidance is required.
Main Authors: | Hartling, Lisa; Fernandes, Ricardo M.; Seida, Jennifer; Vandermeer, Ben; Dryden, Donna M. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2012 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3320617/ https://www.ncbi.nlm.nih.gov/pubmed/22496843 http://dx.doi.org/10.1371/journal.pone.0034697 |
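Note on the reliability figures: the kappa values quoted in the abstract are Cohen's kappa, i.e. agreement between the two raters' categorical judgments for each GRADE domain, corrected for agreement expected by chance. The block below is a minimal, self-contained Python sketch of that calculation; the rater labels and ratings are entirely hypothetical and are not data from the study. When raters show little or no variation (as with the directness domain above), kappa can be undefined or uninformative, which is why the abstract falls back to simple percent agreement in those cases.

```python
# Minimal sketch of Cohen's kappa for two raters' categorical ratings.
# All names and ratings below are hypothetical illustrations, not study data.
from collections import Counter


def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (p_o - p_e) / (1 - p_e), agreement corrected for chance."""
    n = len(rater_a)
    # Observed agreement: proportion of items the raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of the raters' marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(rater_a) | set(rater_b))
    if p_e == 1.0:
        # Degenerate case: every rating falls in one shared category,
        # so p_e = 1 and kappa is undefined.
        return float("nan")
    return (p_o - p_e) / (1 - p_e)


# Hypothetical example: two raters judging "risk of bias" for six outcomes.
rater_1 = ["serious", "not serious", "serious", "serious", "not serious", "serious"]
rater_2 = ["serious", "serious", "serious", "not serious", "not serious", "serious"]
print(round(cohen_kappa(rater_1, rater_2), 2))  # 0.25
```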
_version_ | 1782228870823411712 |
---|---|
author | Hartling, Lisa Fernandes, Ricardo M. Seida, Jennifer Vandermeer, Ben Dryden, Donna M. |
author_facet | Hartling, Lisa Fernandes, Ricardo M. Seida, Jennifer Vandermeer, Ben Dryden, Donna M. |
author_sort | Hartling, Lisa |
collection | PubMed |
description | BACKGROUND: GRADE was developed to address shortcomings of tools to rate the quality of a body of evidence. While much has been published about GRADE, there are few empirical and systematic evaluations. OBJECTIVE: To assess GRADE for systematic reviews (SRs) in terms of inter-rater agreement and identify areas of uncertainty. DESIGN: Cross-sectional, descriptive study. METHODS: We applied GRADE to three SRs (n = 48, 66, and 75 studies, respectively) with 29 comparisons and 12 outcomes overall. Two reviewers graded evidence independently for outcomes deemed clinically important a priori. Inter-rater reliability was assessed using kappas for four main domains (risk of bias, consistency, directness, and precision) and overall quality of evidence. RESULTS: For the first review, reliability was: κ = 0.41 for risk of bias; 0.84 consistency; 0.18 precision; and 0.44 overall quality. Kappa could not be calculated for directness as one rater assessed all items as direct; assessors agreed in 41% of cases. For the second review reliability was: 0.37 consistency and 0.19 precision. Kappa could not be assessed for other items; assessors agreed in 33% of cases for risk of bias; 100% directness; and 58% overall quality. For the third review, reliability was: 0.06 risk of bias; 0.79 consistency; 0.21 precision; and 0.18 overall quality. Assessors agreed in 100% of cases for directness. Precision created the most uncertainty due to difficulties in identifying “optimal” information size and “clinical decision threshold”, as well as making assessments when there was no meta-analysis. The risk of bias domain created uncertainty, particularly for nonrandomized studies. CONCLUSIONS: As researchers with varied levels of training and experience use GRADE, there is risk for variability in interpretation and application. This study shows variable agreement across the GRADE domains, reflecting areas where further guidance is required. |
format | Online Article Text |
id | pubmed-3320617 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2012 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-33206172012-04-11 From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions Hartling, Lisa Fernandes, Ricardo M. Seida, Jennifer Vandermeer, Ben Dryden, Donna M. PLoS One Research Article BACKGROUND: GRADE was developed to address shortcomings of tools to rate the quality of a body of evidence. While much has been published about GRADE, there are few empirical and systematic evaluations. OBJECTIVE: To assess GRADE for systematic reviews (SRs) in terms of inter-rater agreement and identify areas of uncertainty. DESIGN: Cross-sectional, descriptive study. METHODS: We applied GRADE to three SRs (n = 48, 66, and 75 studies, respectively) with 29 comparisons and 12 outcomes overall. Two reviewers graded evidence independently for outcomes deemed clinically important a priori. Inter-rater reliability was assessed using kappas for four main domains (risk of bias, consistency, directness, and precision) and overall quality of evidence. RESULTS: For the first review, reliability was: κ = 0.41 for risk of bias; 0.84 consistency; 0.18 precision; and 0.44 overall quality. Kappa could not be calculated for directness as one rater assessed all items as direct; assessors agreed in 41% of cases. For the second review reliability was: 0.37 consistency and 0.19 precision. Kappa could not be assessed for other items; assessors agreed in 33% of cases for risk of bias; 100% directness; and 58% overall quality. For the third review, reliability was: 0.06 risk of bias; 0.79 consistency; 0.21 precision; and 0.18 overall quality. Assessors agreed in 100% of cases for directness. Precision created the most uncertainty due to difficulties in identifying “optimal” information size and “clinical decision threshold”, as well as making assessments when there was no meta-analysis. The risk of bias domain created uncertainty, particularly for nonrandomized studies. CONCLUSIONS: As researchers with varied levels of training and experience use GRADE, there is risk for variability in interpretation and application. This study shows variable agreement across the GRADE domains, reflecting areas where further guidance is required. Public Library of Science 2012-04-05 /pmc/articles/PMC3320617/ /pubmed/22496843 http://dx.doi.org/10.1371/journal.pone.0034697 Text en Hartling et al. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited. |
spellingShingle | Research Article Hartling, Lisa Fernandes, Ricardo M. Seida, Jennifer Vandermeer, Ben Dryden, Donna M. From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions |
title | From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions |
title_full | From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions |
title_fullStr | From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions |
title_full_unstemmed | From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions |
title_short | From the Trenches: A Cross-Sectional Study Applying the GRADE Tool in Systematic Reviews of Healthcare Interventions |
title_sort | from the trenches: a cross-sectional study applying the grade tool in systematic reviews of healthcare interventions |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3320617/ https://www.ncbi.nlm.nih.gov/pubmed/22496843 http://dx.doi.org/10.1371/journal.pone.0034697 |
work_keys_str_mv | AT hartlinglisa fromthetrenchesacrosssectionalstudyapplyingthegradetoolinsystematicreviewsofhealthcareinterventions AT fernandesricardom fromthetrenchesacrosssectionalstudyapplyingthegradetoolinsystematicreviewsofhealthcareinterventions AT seidajennifer fromthetrenchesacrosssectionalstudyapplyingthegradetoolinsystematicreviewsofhealthcareinterventions AT vandermeerben fromthetrenchesacrosssectionalstudyapplyingthegradetoolinsystematicreviewsofhealthcareinterventions AT drydendonnam fromthetrenchesacrosssectionalstudyapplyingthegradetoolinsystematicreviewsofhealthcareinterventions |