
Are most published research findings false in a continuous universe?

Diagnostic screening models for the interpretation of null hypothesis significance test (NHST) results have been influential in highlighting the effect of selective publication on the reproducibility of the published literature, leading to John Ioannidis’ much-cited claim that most published research findings are false. These models, however, are typically based on the assumption that hypotheses are dichotomously true or false, without considering that effect sizes for different hypotheses are not the same. To address this limitation, we develop a simulation model that overcomes this by modeling effect sizes explicitly using different continuous distributions, while retaining other aspects of previous models such as publication bias and the pursuit of statistical significance. Our results show that the combination of selective publication, bias, low statistical power and unlikely hypotheses consistently leads to high proportions of false positives, irrespective of the effect size distribution assumed. Using continuous effect sizes also allows us to evaluate the degree of effect size overestimation and prevalence of estimates with the wrong sign in the literature, showing that the same factors that drive false-positive results also lead to errors in estimating effect size direction and magnitude. Nevertheless, the relative influence of these factors on different metrics varies depending on the distribution assumed for effect sizes. The model is made available as an R ShinyApp interface, allowing one to explore features of the literature in various scenarios.
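The kind of screening model the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical Python illustration, not the authors' model or their R ShinyApp: it assumes true effects follow a single normal distribution (one of several continuous choices the paper explores), uses a simple publish-if-significant rule to mimic publication bias, and reports the wrong-sign rate and median exaggeration among significant published estimates. All parameter names and defaults here are assumptions for illustration only.

```python
import math
import random

def simulate_literature(n_studies=20000, n_per_group=20, effect_sd=0.3,
                        pub_bias=0.9, alpha_z=1.96, seed=1):
    """Toy screening model with continuous true effect sizes.

    Each study estimates a standardized mean difference whose true value is
    drawn from a normal distribution, so no hypothesis is dichotomously
    true or false. Non-significant results are published only with
    probability 1 - pub_bias. Returns (wrong-sign rate, median exaggeration)
    among significant published results.
    """
    rng = random.Random(seed)
    se = math.sqrt(2.0 / n_per_group)         # approx. SE of the effect estimate
    published = []
    for _ in range(n_studies):
        true_d = rng.gauss(0.0, effect_sd)    # continuous true effect
        obs_d = true_d + rng.gauss(0.0, se)   # estimate with sampling error
        significant = abs(obs_d / se) > alpha_z
        if significant or rng.random() > pub_bias:  # selective publication
            published.append((true_d, obs_d, significant))
    sig = [(t, o) for t, o, s in published if s]
    wrong_sign = sum(1 for t, o in sig if t * o < 0) / len(sig)
    ratios = sorted(abs(o) / abs(t) for t, o in sig if abs(t) > 1e-9)
    return wrong_sign, ratios[len(ratios) // 2]
```

Under these assumptions, low power (small groups) and strong publication bias inflate the median exaggeration well above 1 and produce a non-trivial fraction of published estimates with the wrong sign, mirroring the direction- and magnitude-error metrics the abstract discusses.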


Bibliographic Details
Main Authors: Neves, Kleber; Tan, Pedro B.; Amaral, Olavo B.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9767354/
https://www.ncbi.nlm.nih.gov/pubmed/36538521
http://dx.doi.org/10.1371/journal.pone.0277935
collection PubMed
description Diagnostic screening models for the interpretation of null hypothesis significance test (NHST) results have been influential in highlighting the effect of selective publication on the reproducibility of the published literature, leading to John Ioannidis’ much-cited claim that most published research findings are false. These models, however, are typically based on the assumption that hypotheses are dichotomously true or false, without considering that effect sizes for different hypotheses are not the same. To address this limitation, we develop a simulation model that overcomes this by modeling effect sizes explicitly using different continuous distributions, while retaining other aspects of previous models such as publication bias and the pursuit of statistical significance. Our results show that the combination of selective publication, bias, low statistical power and unlikely hypotheses consistently leads to high proportions of false positives, irrespective of the effect size distribution assumed. Using continuous effect sizes also allows us to evaluate the degree of effect size overestimation and prevalence of estimates with the wrong sign in the literature, showing that the same factors that drive false-positive results also lead to errors in estimating effect size direction and magnitude. Nevertheless, the relative influence of these factors on different metrics varies depending on the distribution assumed for effect sizes. The model is made available as an R ShinyApp interface, allowing one to explore features of the literature in various scenarios.
id pubmed-9767354
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
Published: Public Library of Science, 2022-12-20
License: © 2022 Neves et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
topic Research Article