Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019)
Whether in meta-analysis or single experiments, selecting results based on statistical significance leads to overestimated effect sizes, impeding falsification. We critique a quantitative synthesis that used significance to score and select previously published effects for situation awareness-performance associations (Endsley, 2019).
Main Authors: | Bakdash, Jonathan Z., Marusich, Laura R., Kenworthy, Jared B., Twedt, Elyssa, Zaroukian, Erin G. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2020 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7783317/ https://www.ncbi.nlm.nih.gov/pubmed/33414750 http://dx.doi.org/10.3389/fpsyg.2020.609647 |
_version_ | 1783632092278554624 |
---|---|
author | Bakdash, Jonathan Z. Marusich, Laura R. Kenworthy, Jared B. Twedt, Elyssa Zaroukian, Erin G. |
author_facet | Bakdash, Jonathan Z. Marusich, Laura R. Kenworthy, Jared B. Twedt, Elyssa Zaroukian, Erin G. |
author_sort | Bakdash, Jonathan Z. |
collection | PubMed |
description | Whether in meta-analysis or single experiments, selecting results based on statistical significance leads to overestimated effect sizes, impeding falsification. We critique a quantitative synthesis that used significance to score and select previously published effects for situation awareness-performance associations (Endsley, 2019). How much does selection using statistical significance quantitatively impact results in a meta-analytic context? We evaluate and compare results using significance-filtered effects versus analyses with all effects as-reported. Endsley reported high predictiveness scores and large positive mean correlations but used atypical methods: the hypothesis was used to select papers and effects. Papers were assigned the maximum predictiveness scores if they contained at-least-one significant effect, yet most papers reported multiple effects, and the number of non-significant effects did not impact the score. Thus, the predictiveness score was rarely less than the maximum. In addition, only significant effects were included in Endsley’s quantitative synthesis. Filtering excluded half of all reported effects, with guaranteed minimum effect sizes based on sample size. Results for filtered compared to as-reported effects clearly diverged. Compared to the mean of as-reported effects, the filtered mean was overestimated by 56%. Furthermore, 92% (or 222 out of 241) of the as-reported effects were below the mean of filtered effects. We conclude that outcome-dependent selection of effects is circular, predetermining results and running contrary to the purpose of meta-analysis. Instead of using significance to score and filter effects, meta-analyses should follow established research practices. |
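The selection mechanism the abstract describes — keeping only statistically significant correlations, which imposes a guaranteed minimum effect size for a given sample size — can be illustrated with a small simulation. This is a sketch, not the authors' analysis; the effect count, per-study sample size, and true correlation below are illustrative assumptions.

```python
import math
import random

def simulate_filtering(n_effects=241, n=30, true_r=0.2, seed=1):
    """Draw simulated study correlations and compare the mean of all
    effects vs. the mean of only the statistically significant ones."""
    random.seed(seed)
    # Significance threshold for a correlation (two-tailed p < .05):
    # t = r*sqrt(n-2)/sqrt(1-r^2) >= t_crit  =>  r_crit = t_crit/sqrt(n-2+t_crit^2)
    t_crit = 2.048                      # approx. t(.975, df=28) for n = 30
    r_crit = t_crit / math.sqrt(n - 2 + t_crit**2)
    # Sampling variability of r approximated via the Fisher z-transform:
    # z = atanh(r) is roughly normal with SE = 1/sqrt(n-3).
    z_true = math.atanh(true_r)
    se_z = 1 / math.sqrt(n - 3)
    effects = [math.tanh(random.gauss(z_true, se_z)) for _ in range(n_effects)]
    significant = [r for r in effects if abs(r) >= r_crit]
    mean_all = sum(effects) / len(effects)
    mean_sig = sum(significant) / len(significant)
    return mean_all, mean_sig

mean_all, mean_sig = simulate_filtering()
```

Because every retained effect must clear `r_crit` (about 0.36 when n = 30), the filtered mean necessarily sits well above the as-reported mean whenever the true correlation is modest, mirroring the overestimation the critique quantifies.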
format | Online Article Text |
id | pubmed-7783317 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-77833172021-01-06 Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) Bakdash, Jonathan Z. Marusich, Laura R. Kenworthy, Jared B. Twedt, Elyssa Zaroukian, Erin G. Front Psychol Psychology Whether in meta-analysis or single experiments, selecting results based on statistical significance leads to overestimated effect sizes, impeding falsification. We critique a quantitative synthesis that used significance to score and select previously published effects for situation awareness-performance associations (Endsley, 2019). How much does selection using statistical significance quantitatively impact results in a meta-analytic context? We evaluate and compare results using significance-filtered effects versus analyses with all effects as-reported. Endsley reported high predictiveness scores and large positive mean correlations but used atypical methods: the hypothesis was used to select papers and effects. Papers were assigned the maximum predictiveness scores if they contained at-least-one significant effect, yet most papers reported multiple effects, and the number of non-significant effects did not impact the score. Thus, the predictiveness score was rarely less than the maximum. In addition, only significant effects were included in Endsley’s quantitative synthesis. Filtering excluded half of all reported effects, with guaranteed minimum effect sizes based on sample size. Results for filtered compared to as-reported effects clearly diverged. Compared to the mean of as-reported effects, the filtered mean was overestimated by 56%. Furthermore, 92% (or 222 out of 241) of the as-reported effects were below the mean of filtered effects. We conclude that outcome-dependent selection of effects is circular, predetermining results and running contrary to the purpose of meta-analysis. 
Instead of using significance to score and filter effects, meta-analyses should follow established research practices. Frontiers Media S.A. 2020-12-22 /pmc/articles/PMC7783317/ /pubmed/33414750 http://dx.doi.org/10.3389/fpsyg.2020.609647 Text en Copyright © 2020 Bakdash, Marusich, Kenworthy, Twedt and Zaroukian. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Bakdash, Jonathan Z. Marusich, Laura R. Kenworthy, Jared B. Twedt, Elyssa Zaroukian, Erin G. Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) |
title | Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) |
title_full | Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) |
title_fullStr | Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) |
title_full_unstemmed | Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) |
title_short | Statistical Significance Filtering Overestimates Effects and Impedes Falsification: A Critique of Endsley (2019) |
title_sort | statistical significance filtering overestimates effects and impedes falsification: a critique of endsley (2019) |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7783317/ https://www.ncbi.nlm.nih.gov/pubmed/33414750 http://dx.doi.org/10.3389/fpsyg.2020.609647 |
work_keys_str_mv | AT bakdashjonathanz statisticalsignificancefilteringoverestimateseffectsandimpedesfalsificationacritiqueofendsley2019 AT marusichlaurar statisticalsignificancefilteringoverestimateseffectsandimpedesfalsificationacritiqueofendsley2019 AT kenworthyjaredb statisticalsignificancefilteringoverestimateseffectsandimpedesfalsificationacritiqueofendsley2019 AT twedtelyssa statisticalsignificancefilteringoverestimateseffectsandimpedesfalsificationacritiqueofendsley2019 AT zaroukianering statisticalsignificancefilteringoverestimateseffectsandimpedesfalsificationacritiqueofendsley2019 |