How is AMSTAR applied by authors – a call for better reporting


Bibliographic Details
Main Authors: Pieper, Dawid; Koensgen, Nadja; Breuing, Jessica; Ge, Long; Wegewitz, Uta
Format: Online Article (Text)
Language: English
Published: BioMed Central, 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6006845/
https://www.ncbi.nlm.nih.gov/pubmed/29914386
http://dx.doi.org/10.1186/s12874-018-0520-z
Collection: PubMed
Description:
BACKGROUND: The Assessment of Multiple Systematic Reviews (AMSTAR) tool is widely used for investigating the methodological quality of systematic reviews (SRs). Originally, AMSTAR was developed for SRs of randomized controlled trials (RCTs); its applicability to SRs of other study designs remains unclear. Our objectives were to (1) analyze how AMSTAR is applied by authors and (2) analyze whether authors pay attention to the original purpose of AMSTAR and what it has been validated for.

METHODS: We searched MEDLINE (via PubMed) from inception through October 2016 to identify studies that applied AMSTAR. Full texts were sought for all retrieved hits and screened by one reviewer; a second reviewer verified the excluded studies (liberal acceleration). Data were extracted into structured tables by one reviewer and checked by a second reviewer. Discrepancies at any stage were resolved by consensus or by consulting a third person. We analyzed the data descriptively as frequencies or as medians with interquartile ranges (IQRs). Associations were quantified using risk ratios (RRs) with 95% confidence intervals (CIs).

RESULTS: We identified 247 studies. They included a median of 17 reviews (IQR: 8 to 47) per study. AMSTAR was modified in 23% (57/247) of studies. In most studies, an AMSTAR score was calculated (200/247; 81%). Methods for calculating an AMSTAR score varied, with summing all "yes" answers (yes = 1) being the most frequent option (102/200; 51%). More than one third of authors failed to report how the AMSTAR score was obtained (71/200; 36%). In a subgroup analysis, we compared overviews of reviews (n = 154) with methodological publications (n = 93). Overviews of reviews were much less likely to mention limitations with respect to study designs (when the included reviews contained studies other than RCTs) (RR 0.27, 95% CI 0.09 to 0.75) and to mention the overall score (RR 0.08, 95% CI 0.02 to 0.35).

CONCLUSIONS: Authors, peer reviewers, and editors should pay more attention to the correct use and reporting of assessment tools in evidence synthesis. Authors of overviews of reviews should ensure that their review team includes a methodological expert.
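The two calculations named in the abstract, summing "yes" answers into an AMSTAR score and quantifying an association as a risk ratio with a 95% confidence interval, can be sketched as below. This is a minimal illustration, not the authors' code: the item answers and 2x2 counts are hypothetical, and the CI uses the standard Katz log-interval formula, which the paper does not specify.

```python
import math

def amstar_score(answers):
    """AMSTAR score as the sum of 'yes' answers (yes = 1), the most
    frequent scoring method reported in the study (102/200; 51%)."""
    return sum(1 for a in answers if a == "yes")

def risk_ratio(a, n1, c, n2, z=1.96):
    """Risk ratio of proportions (a/n1) / (c/n2) with a 95% CI on the
    log scale (Katz method); z = 1.96 for a 95% interval."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)  # SE of ln(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical example: 11 AMSTAR items for one review.
answers = ["yes", "no", "yes", "can't answer", "yes",
           "yes", "no", "yes", "yes", "not applicable", "yes"]
print(amstar_score(answers))

# Hypothetical example: event in 10/100 overviews vs 20/100 publications.
rr, lo, hi = risk_ratio(10, 100, 20, 100)
print(rr, lo, hi)
```

An RR below 1 with an upper CI bound below 1 (as in the paper's RR 0.27, 95% CI 0.09 to 0.75) indicates the first group reports the item significantly less often.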
Record ID: pubmed-6006845
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: BMC Med Res Methodol (Research Article)
Published online: 2018-06-18
License: © The Author(s) 2018. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.