Assessment of a method to detect signals for updating systematic reviews
BACKGROUND: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals of when a systematic review needs updating have face validity, but no proposed method has had an assessment of predictive validity performed. METHODS: The AHRQ...
Main authors: Shekelle, Paul G; Motala, Aneesa; Johnsen, Breanne; Newberry, Sydne J
Format: Online Article Text
Language: English
Published: BioMed Central, 2014
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3937021/ https://www.ncbi.nlm.nih.gov/pubmed/24529068 http://dx.doi.org/10.1186/2046-4053-3-13
_version_ | 1782305412117168128 |
---|---|
author | Shekelle, Paul G; Motala, Aneesa; Johnsen, Breanne; Newberry, Sydne J |
author_facet | Shekelle, Paul G; Motala, Aneesa; Johnsen, Breanne; Newberry, Sydne J |
author_sort | Shekelle, Paul G |
collection | PubMed |
description | BACKGROUND: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals of when a systematic review needs updating have face validity, but no proposed method has had an assessment of predictive validity performed. METHODS: The AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews, by 2009, 11 of which were assessed in 2009 using a surveillance system to determine the degree to which individual conclusions were out of date and to assign a priority for updating each report. Four CERs were judged to be a high priority for updating, four CERs were judged to be medium priority for updating, and three CERs were judged to be low priority for updating. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence between these pairs with our original predictions about which conclusions in each CER remained valid. We then classified the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority. RESULTS: The 9 CERs included 149 individual conclusions, 84% with matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both CERs originally judged as being low priority for updating had no substantive changes to their conclusions in the actual updated report. The agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74. CONCLUSIONS: These results provide some support for the validity of a surveillance system for detecting signals indicating when a systematic review needs updating. |
format | Online Article Text |
id | pubmed-3937021 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-3937021 2014-02-28 Assessment of a method to detect signals for updating systematic reviews Shekelle, Paul G Motala, Aneesa Johnsen, Breanne Newberry, Sydne J Syst Rev Research BACKGROUND: Systematic reviews are a cornerstone of evidence-based medicine but are useful only if up-to-date. Methods for detecting signals of when a systematic review needs updating have face validity, but no proposed method has had an assessment of predictive validity performed. METHODS: The AHRQ Comparative Effectiveness Review program had produced 13 comparative effectiveness reviews (CERs), a subcategory of systematic reviews, by 2009, 11 of which were assessed in 2009 using a surveillance system to determine the degree to which individual conclusions were out of date and to assign a priority for updating each report. Four CERs were judged to be a high priority for updating, four CERs were judged to be medium priority for updating, and three CERs were judged to be low priority for updating. AHRQ then commissioned full update reviews for 9 of these 11 CERs. Where possible, we matched the original conclusions with their corresponding conclusions in the update reports, and compared the congruence between these pairs with our original predictions about which conclusions in each CER remained valid. We then classified the concordance of each pair as good, fair, or poor. We also made a summary determination of the priority for updating each CER based on the actual changes in conclusions in the updated report, and compared these determinations with the earlier assessments of priority. RESULTS: The 9 CERs included 149 individual conclusions, 84% with matches in the update reports. Across reports, 83% of matched conclusions had good concordance, and 99% had good or fair concordance. The one instance of poor concordance was partially attributable to the publication of new evidence after the surveillance signal searches had been done. Both CERs originally judged as being low priority for updating had no substantive changes to their conclusions in the actual updated report. The agreement on overall priority for updating between prediction and actual changes to conclusions was Kappa = 0.74. CONCLUSIONS: These results provide some support for the validity of a surveillance system for detecting signals indicating when a systematic review needs updating. BioMed Central 2014-02-14 /pmc/articles/PMC3937021/ /pubmed/24529068 http://dx.doi.org/10.1186/2046-4053-3-13 Text en Copyright © 2014 Shekelle et al.; licensee BioMed Central Ltd. http://creativecommons.org/licenses/by/2.0 This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research Shekelle, Paul G Motala, Aneesa Johnsen, Breanne Newberry, Sydne J Assessment of a method to detect signals for updating systematic reviews |
title | Assessment of a method to detect signals for updating systematic reviews |
title_full | Assessment of a method to detect signals for updating systematic reviews |
title_fullStr | Assessment of a method to detect signals for updating systematic reviews |
title_full_unstemmed | Assessment of a method to detect signals for updating systematic reviews |
title_short | Assessment of a method to detect signals for updating systematic reviews |
title_sort | assessment of a method to detect signals for updating systematic reviews |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3937021/ https://www.ncbi.nlm.nih.gov/pubmed/24529068 http://dx.doi.org/10.1186/2046-4053-3-13 |
work_keys_str_mv | AT shekellepaulg assessmentofamethodtodetectsignalsforupdatingsystematicreviews AT motalaaneesa assessmentofamethodtodetectsignalsforupdatingsystematicreviews AT johnsenbreanne assessmentofamethodtodetectsignalsforupdatingsystematicreviews AT newberrysydnej assessmentofamethodtodetectsignalsforupdatingsystematicreviews |
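The abstract reports agreement between predicted and actual update priority as Kappa = 0.74, i.e. Cohen's kappa, which corrects observed agreement for the agreement expected by chance. As a minimal sketch of how that statistic is computed, the priority labels below are hypothetical illustrations, not the paper's per-CER data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items where the two ratings agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical predicted vs. actual update priorities for 9 reviews
predicted = ["high", "high", "high", "high", "med", "med", "med", "low", "low"]
actual    = ["high", "high", "high", "med",  "med", "med", "med", "low", "low"]
print(round(cohens_kappa(predicted, actual), 2))  # → 0.83
```

A kappa of 0.74, as reported in the study, is conventionally read as substantial agreement beyond chance.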