
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias


Bibliographic Details
Main Author: Schneck, Andreas
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712469/
https://www.ncbi.nlm.nih.gov/pubmed/29204324
http://dx.doi.org/10.7717/peerj.4115
_version_ 1783283225661014016
author Schneck, Andreas
author_facet Schneck, Andreas
author_sort Schneck, Andreas
collection PubMed
description BACKGROUND: Publication bias is a form of scientific misconduct. It threatens the validity of research results and the credibility of science. Although several tests for publication bias exist, no in-depth evaluations are available that examine which test performs best in different research settings. METHODS: Four tests for publication bias were evaluated in a Monte Carlo simulation: Egger's test (FAT), p-uniform, the test of excess significance (TES), and the caliper test (CT). Two types of publication bias were simulated, each at three degrees (0%, 50%, 100%). The type of publication bias was defined either as file-drawer, meaning the repeated analysis of new datasets, or as p-hacking, meaning the inclusion of covariates in order to obtain a significant result. In addition, the underlying effect (β = 0, 0.5, 1, 1.5), effect heterogeneity, the number of observations in the simulated primary studies (N = 100, 500), and the number of primary studies available to the publication bias tests (K = 100, 1,000) were varied. RESULTS: All tests evaluated were able to identify publication bias under both the file-drawer and the p-hacking condition. The false positive rates were unbiased, with the exception of the 15%- and 20%-caliper tests. The FAT had the largest statistical power in the file-drawer conditions, whereas under p-hacking the TES was slightly better, except under effect heterogeneity. The caliper tests (CTs) were, however, inferior to the other tests under effect homogeneity and had decent statistical power only in conditions with 1,000 primary studies. DISCUSSION: The FAT is recommended as a test for publication bias in standard meta-analyses with no or only small effect heterogeneity. If two-sided publication bias is suspected, and under p-hacking, the TES is the first alternative to the FAT. The 5%-caliper test is recommended under conditions of effect heterogeneity and a large number of primary studies, which may be found if publication bias is examined in a discipline-wide setting where primary studies cover different research problems.
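The description names Egger's regression test (FAT) as the most powerful test under file-drawer bias. As a minimal illustrative sketch, not the article's actual simulation code, the setup can be reproduced in Python: simulate primary studies under one-sided file-drawer selection, then regress each study's t-value on its precision; a nonzero intercept indicates funnel-plot asymmetry. The specific parameter choices (study sizes, seed, one-sided selection rule) are assumptions for this sketch, not taken from the article.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate_studies(k, beta, bias_share):
    """Draw k primary studies of a slope `beta`; with probability
    `bias_share` a study is published only if its estimated slope is
    positive and significant (one-sided file-drawer selection)."""
    effects, ses = [], []
    while len(effects) < k:
        n = int(rng.integers(30, 300))        # varying study size
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        fit = stats.linregress(x, y)
        significant = fit.slope > 0 and fit.pvalue < 0.05
        if significant or rng.random() > bias_share:
            effects.append(fit.slope)
            ses.append(fit.stderr)
    return np.array(effects), np.array(ses)

def egger_fat(effects, ses):
    """Egger's regression (funnel asymmetry test): regress the studies'
    t-values on their precisions; return the z-statistic of the
    intercept, whose departure from zero signals asymmetry."""
    fit = stats.linregress(1.0 / ses, effects / ses)
    return fit.intercept / fit.intercept_stderr

# No true effect, full file-drawer bias: the FAT should flag asymmetry.
effects, ses = simulate_studies(k=100, beta=0.0, bias_share=1.0)
z_bias = egger_fat(effects, ses)
```

With full one-sided selection every surviving t-value exceeds the significance threshold regardless of precision, so the fitted intercept is pushed well above zero and `z_bias` far exceeds the usual 1.96 cutoff.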
format Online
Article
Text
id pubmed-5712469
institution National Center for Biotechnology Information
language English
publishDate 2017
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-57124692017-12-04 Examining publication bias—a simulation-based evaluation of statistical tests on publication bias Schneck, Andreas PeerJ Ethical Issues PeerJ Inc. 2017-11-30 /pmc/articles/PMC5712469/ /pubmed/29204324 http://dx.doi.org/10.7717/peerj.4115 Text en ©2017 Schneck http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ) and either DOI or URL of the article must be cited.
spellingShingle Ethical Issues
Schneck, Andreas
Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
title Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
title_full Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
title_fullStr Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
title_full_unstemmed Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
title_short Examining publication bias—a simulation-based evaluation of statistical tests on publication bias
title_sort examining publication bias—a simulation-based evaluation of statistical tests on publication bias
topic Ethical Issues
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5712469/
https://www.ncbi.nlm.nih.gov/pubmed/29204324
http://dx.doi.org/10.7717/peerj.4115
work_keys_str_mv AT schneckandreas examiningpublicationbiasasimulationbasedevaluationofstatisticaltestsonpublicationbias