A test for reporting bias in trial networks: simulation and case studies
BACKGROUND: Networks of trials assessing several treatment options available for the same condition are increasingly considered. Randomized trial evidence may be missing because of reporting bias. We propose a test for reporting bias in trial networks. METHODS: We test whether there is an excess of...
Main Authors: | Trinquart, Ludovic; Ioannidis, John PA; Chatellier, Gilles; Ravaud, Philippe |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2014 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193287/ https://www.ncbi.nlm.nih.gov/pubmed/25262204 http://dx.doi.org/10.1186/1471-2288-14-112 |
_version_ | 1782338947839426560 |
---|---|
author | Trinquart, Ludovic Ioannidis, John PA Chatellier, Gilles Ravaud, Philippe |
author_facet | Trinquart, Ludovic Ioannidis, John PA Chatellier, Gilles Ravaud, Philippe |
author_sort | Trinquart, Ludovic |
collection | PubMed |
description | BACKGROUND: Networks of trials assessing several treatment options available for the same condition are increasingly considered. Randomized trial evidence may be missing because of reporting bias. We propose a test for reporting bias in trial networks. METHODS: We test whether there is an excess of trials with statistically significant results across a network of trials. The observed number of trials with nominally statistically significant p-values across the network is compared with the expected number. The performance of the test (type I error rate and power) was assessed using simulation studies under different scenarios of selective reporting bias. Examples are provided for networks of antidepressant and antipsychotic trials, where reporting biases have been previously demonstrated by comparing published to Food and Drug Administration (FDA) data. RESULTS: In simulations, the test maintained the type I error rate and was moderately powerful after adjustment for type I error rate, except when the between-trial variance was substantial. In all, a positive test result increased moderately or markedly the probability of reporting bias being present, while a negative test result was not very informative. In the two examples, the test gave a signal for an excess of statistically significant results in the network of published data but not in the network of FDA data. CONCLUSION: The test could be useful to document an excess of significant findings in trial networks, providing a signal for potential publication bias or other selective analysis and outcome reporting biases. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/1471-2288-14-112) contains supplementary material, which is available to authorized users. |
format | Online Article Text |
id | pubmed-4193287 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-41932872014-10-11 A test for reporting bias in trial networks: simulation and case studies Trinquart, Ludovic Ioannidis, John PA Chatellier, Gilles Ravaud, Philippe BMC Med Res Methodol Research Article BACKGROUND: Networks of trials assessing several treatment options available for the same condition are increasingly considered. Randomized trial evidence may be missing because of reporting bias. We propose a test for reporting bias in trial networks. METHODS: We test whether there is an excess of trials with statistically significant results across a network of trials. The observed number of trials with nominally statistically significant p-values across the network is compared with the expected number. The performance of the test (type I error rate and power) was assessed using simulation studies under different scenarios of selective reporting bias. Examples are provided for networks of antidepressant and antipsychotic trials, where reporting biases have been previously demonstrated by comparing published to Food and Drug Administration (FDA) data. RESULTS: In simulations, the test maintained the type I error rate and was moderately powerful after adjustment for type I error rate, except when the between-trial variance was substantial. In all, a positive test result increased moderately or markedly the probability of reporting bias being present, while a negative test result was not very informative. In the two examples, the test gave a signal for an excess of statistically significant results in the network of published data but not in the network of FDA data. CONCLUSION: The test could be useful to document an excess of significant findings in trial networks, providing a signal for potential publication bias or other selective analysis and outcome reporting biases. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/1471-2288-14-112) contains supplementary material, which is available to authorized users. BioMed Central 2014-09-27 /pmc/articles/PMC4193287/ /pubmed/25262204 http://dx.doi.org/10.1186/1471-2288-14-112 Text en © Trinquart et al.; licensee BioMed Central Ltd. 2014 This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research Article Trinquart, Ludovic Ioannidis, John PA Chatellier, Gilles Ravaud, Philippe A test for reporting bias in trial networks: simulation and case studies |
title | A test for reporting bias in trial networks: simulation and case studies |
title_full | A test for reporting bias in trial networks: simulation and case studies |
title_fullStr | A test for reporting bias in trial networks: simulation and case studies |
title_full_unstemmed | A test for reporting bias in trial networks: simulation and case studies |
title_short | A test for reporting bias in trial networks: simulation and case studies |
title_sort | test for reporting bias in trial networks: simulation and case studies |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4193287/ https://www.ncbi.nlm.nih.gov/pubmed/25262204 http://dx.doi.org/10.1186/1471-2288-14-112 |
work_keys_str_mv | AT trinquartludovic atestforreportingbiasintrialnetworkssimulationandcasestudies AT ioannidisjohnpa atestforreportingbiasintrialnetworkssimulationandcasestudies AT chatelliergilles atestforreportingbiasintrialnetworkssimulationandcasestudies AT ravaudphilippe atestforreportingbiasintrialnetworkssimulationandcasestudies AT trinquartludovic testforreportingbiasintrialnetworkssimulationandcasestudies AT ioannidisjohnpa testforreportingbiasintrialnetworkssimulationandcasestudies AT chatelliergilles testforreportingbiasintrialnetworkssimulationandcasestudies AT ravaudphilippe testforreportingbiasintrialnetworkssimulationandcasestudies |
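The abstract above describes comparing the observed number of trials with nominally significant p-values against the expected number across a network. Below is a minimal Python sketch of that general idea, not the authors' implementation: it assumes each trial's power is computed against a single assumed "true" effect (for example, a network meta-analysis summary estimate), and it uses a simple binomial comparison based on mean power as an illustrative approximation. The effect size, standard errors, and observed count are hypothetical.

```python
# Illustrative excess-significance check: observed vs. expected number of
# significant trials. All numbers below are made up; the binomial comparison
# on mean power is an assumption for illustration, not the paper's procedure.

import numpy as np
from scipy import stats


def expected_significant(true_effect, std_errors, alpha=0.05):
    """Expected number of significant trials = sum of per-trial power,
    assuming each trial estimates `true_effect` with the given standard error."""
    z_crit = stats.norm.ppf(1 - alpha / 2)
    z = np.asarray(true_effect) / np.asarray(std_errors)
    # Two-sided power of a Wald (z) test for each trial
    power = stats.norm.cdf(-z_crit - z) + 1 - stats.norm.cdf(z_crit - z)
    return power.sum(), power


def excess_significance_pvalue(observed_significant, power_per_trial):
    """One-sided binomial test of whether the observed count of significant
    trials exceeds what the mean per-trial power would predict."""
    n = len(power_per_trial)
    mean_power = float(np.mean(power_per_trial))
    res = stats.binomtest(observed_significant, n, mean_power, alternative="greater")
    return res.pvalue


# Hypothetical example: 10 trials contributing to one comparison in the network
std_errors = np.array([0.20, 0.25, 0.18, 0.30, 0.22, 0.27, 0.19, 0.24, 0.21, 0.26])
summary_effect = 0.30   # assumed "true" effect (e.g., a summary estimate)
observed = 7            # trials reporting p < 0.05

expected, power = expected_significant(summary_effect, std_errors)
p_value = excess_significance_pvalue(observed, power)
print(f"Observed significant: {observed}, expected: {expected:.1f}, one-sided p = {p_value:.3f}")
```

A small one-sided p-value here would signal more significant results than the trials' power can plausibly explain, which is the kind of signal the paper interprets as potential publication or selective reporting bias.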