Bayesian evaluation of effect size after replicating an original study

Bibliographic Details
Main Authors: van Aert, Robbie C. M., van Assen, Marcel A. L. M.
Format: Online Article Text
Language: English
Published: Public Library of Science, 2017-04-07
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5384677/
https://www.ncbi.nlm.nih.gov/pubmed/28388646
http://dx.doi.org/10.1371/journal.pone.0175302
Collection: PubMed
Record ID: pubmed-5384677
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
License: © 2017 van Aert, van Assen. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Description
The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project: Psychology (RPP) and the Experimental Economics Replication Project (EE-RP) replicated a large number of published studies in psychology and economics. Both the original study and its replication were statistically significant in 36.1% of cases in RPP and in 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand, and that quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect, and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then apply the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the studies included in EE-RP are generally larger than those in RPP, but that the sample sizes, especially of the studies included in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of the replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method.
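
The description specifies the core computation: evaluate the likelihood of the original study and its replication at four fixed true effect sizes (the "snapshots"), truncate the original study's likelihood at its significance threshold to correct for publication bias, and normalize the results into posterior model probabilities. Below is a minimal sketch of that computation in R, assuming Fisher-z transformed correlations with sampling variance 1/(n - 3), snapshots at r = 0, .1, .3, and .5, a two-tailed alpha of .05, and a positive observed original effect; the function name and arguments are illustrative, not the authors' released code.

    # Sketch of snapshot-style posterior model probabilities; assumes Fisher-z
    # transformed correlations and a positive, significant original effect.
    snapshot_probs <- function(r_orig, n_orig, r_rep, n_rep, alpha = 0.05) {
      theta <- c(zero = 0, small = 0.1, medium = 0.3, large = 0.5) # snapshot correlations
      z_theta <- atanh(theta)                      # Fisher-z transform of the snapshots
      z_orig <- atanh(r_orig); se_orig <- 1 / sqrt(n_orig - 3)
      z_rep  <- atanh(r_rep);  se_rep  <- 1 / sqrt(n_rep - 3)
      z_crit <- qnorm(1 - alpha / 2) * se_orig     # significance cutoff on the z scale

      # Original study: normal density truncated below at the critical value, which
      # conditions on the original being statistically significant (publication bias).
      lik_orig <- dnorm(z_orig, mean = z_theta, sd = se_orig) /
                  pnorm(z_crit, mean = z_theta, sd = se_orig, lower.tail = FALSE)
      # Replication: ordinary, untruncated normal density.
      lik_rep <- dnorm(z_rep, mean = z_theta, sd = se_rep)

      lik <- lik_orig * lik_rep
      lik / sum(lik)  # posterior model probabilities under equal prior model weights
    }

    # Example: significant original (r = .40, n = 50) followed by a weak replication
    snapshot_probs(r_orig = 0.40, n_orig = 50, r_rep = 0.10, n_rep = 100)

Evaluating these probabilities over a grid of candidate n_rep values before data collection corresponds to the replication sample-size determination mentioned above, analogous to a power analysis; the authors' web application (https://rvanaert.shinyapps.io/snapshot/) implements the published version of the method.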