Do simple screening statistical tools help to detect reporting bias?
Main authors: | Pirracchio, Romain; Resche-Rigon, Matthieu; Chevret, Sylvie; Journois, Didier |
Format: | Online Article Text |
Language: | English |
Published: | Springer, 2013 |
Subjects: | Research |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3847052/ https://www.ncbi.nlm.nih.gov/pubmed/24004521 http://dx.doi.org/10.1186/2110-5820-3-29 |
_version_ | 1782293528651497472 |
author | Pirracchio, Romain; Resche-Rigon, Matthieu; Chevret, Sylvie; Journois, Didier
author_sort | Pirracchio, Romain |
collection | PubMed |
description | BACKGROUND: As a result of reporting bias or fraud, false or misunderstood findings may represent the majority of published research claims. This article provides simple methods that might help to appraise the quality of the reporting of randomized controlled trials (RCTs). METHODS: The evaluation roadmap proposed herein relies on four steps: evaluation of the distribution of the reported variables; evaluation of the distribution of the reported p values; data simulation using a parametric bootstrap; and explicit recomputation of the p values. The approach is illustrated using published data from a retracted RCT comparing hydroxyethyl starch-based versus albumin-based priming for cardiopulmonary bypass. RESULTS: Despite obviously nonnormal distributions, several variables are presented as if they were normally distributed. The set of 16 p values testing for differences in baseline characteristics across randomized groups did not follow a uniform distribution on [0,1] (p = 0.045). The p values obtained by explicit computation differed from the results reported by the authors for two variables: urine output at 5 hours (calculated p < 10^(-6), reported p ≥ 0.05) and packed red blood cells (PRBC) transfused during surgery (calculated p = 0.08, reported p < 0.05). Finally, the parametric bootstrap yielded a p value > 0.05 in only 5 of the 10,000 simulated datasets for urine output 5 hours after surgery. For PRBC transfused during surgery, the parametric bootstrap showed that the corresponding p value had less than a 50% chance of falling below 0.05 (3,920/10,000 simulated p values < 0.05). CONCLUSIONS: Such simple evaluation methods may offer warning signals. However, it should be emphasized that they do not allow one to conclude that error or fraud is present; rather, they should be used to justify requesting access to the raw data.
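The second step above screens the set of baseline p values for departure from uniformity: under a valid randomization, baseline-comparison p values should be approximately i.i.d. Uniform on [0,1]. A minimal sketch of such a check follows, assuming a one-sample Kolmogorov-Smirnov test (the abstract does not name the uniformity test the authors used) and 16 placeholder p values, not the trial's actual values.

```python
# Screening reported baseline p values for departure from Uniform[0,1].
# The 16 values below are hypothetical placeholders, not the trial's data.
from scipy import stats

reported_p = [0.12, 0.48, 0.03, 0.91, 0.27, 0.66, 0.05, 0.83,
              0.39, 0.74, 0.18, 0.58, 0.09, 0.95, 0.31, 0.44]

# One-sample Kolmogorov-Smirnov test against the Uniform[0,1] distribution
# expected under pure randomization.
ks_stat, ks_p = stats.kstest(reported_p, "uniform")
print(f"KS statistic = {ks_stat:.3f}, p = {ks_p:.3f}")
```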
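The explicit-recomputation step re-derives a two-group p value directly from the summary statistics a paper reports (mean, SD, and n per arm) and compares it with the printed value. A minimal sketch under the assumption of a Welch two-sample t-test; all numbers are hypothetical placeholders, not the retracted trial's data.

```python
# Recomputing a two-group p value from reported summary statistics alone.
from scipy import stats

# Hypothetical arm-level summaries: mean, SD, n (e.g., starch vs. albumin).
res = stats.ttest_ind_from_stats(mean1=410.0, std1=120.0, nobs1=23,
                                 mean2=540.0, std2=130.0, nobs2=23,
                                 equal_var=False)  # Welch's t-test
print(f"recomputed p = {res.pvalue:.4g}")
```

A recomputed p value that lands on the other side of 0.05 from the reported one, as for urine output and PRBC in the example trial, is exactly the kind of warning signal the method is after.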
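The parametric-bootstrap step simulates many datasets from the reported group means and SDs (under the normality the paper's tables imply), re-runs the test on each, and tabulates how often p falls below 0.05. A minimal sketch with hypothetical summary statistics and a Welch t-test as the per-dataset test:

```python
# Parametric bootstrap: how plausible is the reported significance verdict
# given the reported summary statistics? Placeholder numbers throughout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim = 10_000
mean1, sd1, n1 = 410.0, 120.0, 23   # hypothetical arm 1
mean2, sd2, n2 = 540.0, 130.0, 23   # hypothetical arm 2

significant = 0
for _ in range(n_sim):
    # Draw one simulated dataset per arm under the implied normal model.
    x = rng.normal(mean1, sd1, n1)
    y = rng.normal(mean2, sd2, n2)
    if stats.ttest_ind(x, y, equal_var=False).pvalue < 0.05:
        significant += 1

print(f"{significant}/{n_sim} simulated datasets give p < 0.05")
```

Counts like the abstract's 9,995/10,000 (urine output) or 3,920/10,000 (PRBC) come from exactly this kind of tally.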
format | Online Article Text |
id | pubmed-3847052 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2013 |
publisher | Springer |
record_format | MEDLINE/PubMed |
spelling | pubmed-3847052 2013-12-06. Do simple screening statistical tools help to detect reporting bias? Pirracchio, Romain; Resche-Rigon, Matthieu; Chevret, Sylvie; Journois, Didier. Ann Intensive Care (Research). Springer 2013-09-02. /pmc/articles/PMC3847052/ /pubmed/24004521 http://dx.doi.org/10.1186/2110-5820-3-29 Text en. Copyright © 2013 Pirracchio et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
title | Do simple screening statistical tools help to detect reporting bias? |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3847052/ https://www.ncbi.nlm.nih.gov/pubmed/24004521 http://dx.doi.org/10.1186/2110-5820-3-29 |