Assessing treatment effects and publication bias across different specialties in medicine: a meta-epidemiological study
|  |  |
|---|---|
| Main Authors: | , , |
| Format: | Online Article Text |
| Language: | English |
| Published: | BMJ Publishing Group, 2021 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8442042/ ; https://www.ncbi.nlm.nih.gov/pubmed/34521659 ; http://dx.doi.org/10.1136/bmjopen-2020-045942 |
| Summary: | OBJECTIVES: To assess the prevalence of statistically significant treatment effects, adverse events and small-study effects (when small studies report more extreme results than large studies) and publication bias (over-reporting of statistically significant results) across medical specialties. DESIGN: Large meta-epidemiological study of treatment effects from the Cochrane Database of Systematic Reviews. METHODS: We investigated outcomes from 57 162 studies from 1922 to 2019, and overall 98 966 meta-analyses and 5534 large meta-analyses (≥10 studies). Egger’s and Harbord’s tests to detect small-study effects, limit meta-analysis and Copas selection models to bias-adjust effect estimates, and generalised linear mixed models were used to analyse one of the largest collections of evidence in medicine. RESULTS: Medical specialties showed differences in the prevalence of statistically significant results of efficacy and safety outcomes. Treatment effects from primary studies published in high-ranking journals were more likely to be statistically significant (OR=1.52; 95% CI 1.32 to 1.75), while randomised controlled trials were less likely to report a statistically significant effect (OR=0.90; 95% CI 0.86 to 0.94). Altogether 19% (95% CI 18% to 20%) of the large meta-analyses showed evidence for small-study effects, but only 3.9% (95% CI 3.4% to 4.4%) showed evidence for publication bias after further assessment of funnel plots. Adjusting treatment effects resulted in overall less evidence for efficacy. CONCLUSIONS: These results suggest that reporting of large treatment effects from small studies may cause greater concern than publication bias. Incentives should be created so that studies of the highest quality become more visible than studies that report more extreme results. |
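The small-study-effects assessment described in the summary uses Egger's regression test, which regresses each study's standardized effect (effect divided by its standard error) on its precision (the inverse standard error); an intercept that differs significantly from zero indicates funnel-plot asymmetry. A minimal sketch of that test, assuming per-study effect estimates and standard errors are available (the function and variable names are illustrative, not from the study's analysis code):

```python
import numpy as np
from scipy import stats

def eggers_test(effects, standard_errors):
    """Egger's regression test for small-study effects.

    Regresses standardized effects (effect / SE) on precision (1 / SE)
    and tests whether the intercept differs from zero. Returns the
    intercept estimate and its two-sided p-value.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(standard_errors, dtype=float)

    y = effects / ses   # standardized effects
    x = 1.0 / ses       # precision

    # Ordinary least squares with an intercept column
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Residual variance and covariance of the coefficient estimates
    resid = y - X @ beta
    dof = n - 2
    sigma2 = (resid @ resid) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)

    # t-test on the intercept (beta[0]): evidence of funnel asymmetry
    t_intercept = beta[0] / np.sqrt(cov[0, 0])
    p_value = 2.0 * stats.t.sf(abs(t_intercept), dof)
    return beta[0], p_value

# Hypothetical example: effect estimates and SEs from ten studies
effects = [0.42, 0.35, 0.18, 0.51, 0.22, 0.30, 0.61, 0.15, 0.27, 0.44]
ses = [0.30, 0.25, 0.10, 0.35, 0.12, 0.20, 0.40, 0.08, 0.15, 0.28]
intercept, p = eggers_test(effects, ses)
```

Note that the study paired Egger's test with Harbord's test (which modifies the regression for binary outcomes) and only counted publication bias after visual funnel-plot review, so a significant intercept alone is treated as suggestive, not conclusive.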