
Big data, observational research and P-value: a recipe for false-positive findings? A study of simulated and real prospective cohorts

BACKGROUND: An increasing number of observational studies combine large sample sizes with low participation rates, which could lead to standard inference failing to control the false-discovery rate. We investigated if the ‘empirical calibration of P-value’ method (EPCV), reliant on negative controls, can preserve type I error in the context of survival analysis. METHODS: We used simulated cohort studies with 50% participation rate and two different selection bias mechanisms, and a real-life application on predictors of cancer mortality using data from four population-based cohorts in Northern Italy (n = 6976 men and women aged 25–74 years at baseline and 17 years of median follow-up). RESULTS: Type I error for the standard Cox model was above the 5% nominal level in 15 out of 16 simulated settings; for n = 10 000, the chances of a null association with hazard ratio = 1.05 having a P-value < 0.05 were 42.5%. Conversely, EPCV with 10 negative controls preserved the 5% nominal level in all the simulation settings, reducing bias in the point estimate by 80–90% when its main assumption was verified. In the real case, 15 out of 21 (71%) blood markers with no association with cancer mortality according to literature had a P-value < 0.05 in age- and gender-adjusted Cox models. After calibration, only 1 (4.8%) remained statistically significant. CONCLUSIONS: In the analyses of large observational studies prone to selection bias, the use of empirical distribution to calibrate P-values can substantially reduce the number of trivial results needing further screening for relevance and external validity.
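
The ‘empirical calibration of P-value’ method referenced in the abstract re-evaluates an effect estimate against a null distribution fitted to negative-control exposures (exposures assumed to have no true effect) rather than against the theoretical null. The sketch below is only a rough illustration under assumed inputs, not the authors' implementation: it uses a simple moment-based fit of the empirical null, whereas published calibration methods typically estimate it by maximum likelihood, and the function calibrated_p_value and the simulated numbers are hypothetical.

```python
import numpy as np
from scipy import stats

def calibrated_p_value(log_hr, se, null_log_hrs, null_ses):
    """Two-sided P-value of an estimate against an empirical null.

    The empirical null is fitted to the log hazard ratios of negative
    controls; a simple moment-based fit is used here for illustration
    (published calibration methods typically use maximum likelihood).
    """
    mu = np.mean(null_log_hrs)
    # Systematic spread = observed spread minus average sampling variance
    tau2 = max(np.var(null_log_hrs, ddof=1) - np.mean(np.square(null_ses)), 0.0)
    # Under the empirical null the estimate is ~ N(mu, tau2 + se^2)
    z = (log_hr - mu) / np.sqrt(tau2 + se ** 2)
    return 2 * stats.norm.sf(abs(z))

# Hypothetical inputs: 10 negative controls whose estimates all drift toward
# HR ~ 1.05 because of residual selection bias, each with a small SE.
rng = np.random.default_rng(1)
null_log_hrs = rng.normal(loc=np.log(1.05), scale=0.03, size=10)
null_ses = np.full(10, 0.02)

# Standard P-value against the theoretical null (log HR = 0): ~0.015, "significant"
print(2 * stats.norm.sf(abs(np.log(1.05)) / 0.02))
# Calibrated P-value against the empirical null: typically well above 0.05
print(calibrated_p_value(np.log(1.05), 0.02, null_log_hrs, null_ses))
```

With the bias pattern assumed above, the conventional test flags a hazard ratio of 1.05 as significant while the calibrated test does not, which mirrors the behaviour the abstract reports for the simulated cohorts.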

Bibliographic Details
Main Authors: Veronesi, Giovanni; Grassi, Guido; Savelli, Giordano; Quatto, Piero; Zambon, Antonella
Format: Online Article Text
Language: English
Published: Oxford University Press, 2020
Subjects: P Values
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7394945/
https://www.ncbi.nlm.nih.gov/pubmed/31620789
http://dx.doi.org/10.1093/ije/dyz206
author Veronesi, Giovanni
Grassi, Guido
Savelli, Giordano
Quatto, Piero
Zambon, Antonella
collection PubMed
description BACKGROUND: An increasing number of observational studies combine large sample sizes with low participation rates, which could lead to standard inference failing to control the false-discovery rate. We investigated if the ‘empirical calibration of P-value’ method (EPCV), reliant on negative controls, can preserve type I error in the context of survival analysis. METHODS: We used simulated cohort studies with 50% participation rate and two different selection bias mechanisms, and a real-life application on predictors of cancer mortality using data from four population-based cohorts in Northern Italy (n = 6976 men and women aged 25–74 years at baseline and 17 years of median follow-up). RESULTS: Type I error for the standard Cox model was above the 5% nominal level in 15 out of 16 simulated settings; for n = 10 000, the chances of a null association with hazard ratio = 1.05 having a P-value < 0.05 were 42.5%. Conversely, EPCV with 10 negative controls preserved the 5% nominal level in all the simulation settings, reducing bias in the point estimate by 80–90% when its main assumption was verified. In the real case, 15 out of 21 (71%) blood markers with no association with cancer mortality according to literature had a P-value < 0.05 in age- and gender-adjusted Cox models. After calibration, only 1 (4.8%) remained statistically significant. CONCLUSIONS: In the analyses of large observational studies prone to selection bias, the use of empirical distribution to calibrate P-values can substantially reduce the number of trivial results needing further screening for relevance and external validity.
format Online
Article
Text
id pubmed-7394945
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-7394945 2020-08-04
Int J Epidemiol (Oxford University Press), 2020-06; published online 2019-10-16
© The Author(s) 2019. Published by Oxford University Press on behalf of the International Epidemiological Association. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com
title Big data, observational research and P-value: a recipe for false-positive findings? A study of simulated and real prospective cohorts
topic P Values
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7394945/
https://www.ncbi.nlm.nih.gov/pubmed/31620789
http://dx.doi.org/10.1093/ije/dyz206