
Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study


Bibliographic Details
Main Authors: Geroldinger, Angelika; Lusa, Lara; Nold, Mariana; Heinze, Georg
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10152625/
https://www.ncbi.nlm.nih.gov/pubmed/37127679
http://dx.doi.org/10.1186/s41512-023-00146-0
author Geroldinger, Angelika
Lusa, Lara
Nold, Mariana
Heinze, Georg
author_facet Geroldinger, Angelika
Lusa, Lara
Nold, Mariana
Heinze, Georg
author_sort Geroldinger, Angelika
collection PubMed
description BACKGROUND: The performance of models for binary outcomes can be described by measures such as the concordance statistic (c-statistic, area under the curve), the discrimination slope, or the Brier score. At internal validation, data resampling techniques, e.g., cross-validation, are frequently employed to correct for optimism in these model performance criteria. Especially with small samples or rare events, leave-one-out cross-validation is a popular choice. METHODS: Using simulations and a real data example, we compared the effect of different resampling techniques on the estimation of c-statistics, discrimination slopes, and Brier scores for three estimators of logistic regression models, including the maximum likelihood and two maximum penalized likelihood estimators. RESULTS: Our simulation study confirms earlier studies reporting that leave-one-out cross-validated c-statistics can be strongly biased towards zero. In addition, our study reveals that this bias is even more pronounced for model estimators shrinking estimated probabilities towards the observed event fraction, such as ridge regression. Leave-one-out cross-validation also provided pessimistic estimates of the discrimination slope but nearly unbiased estimates of the Brier score. CONCLUSIONS: We recommend using leave-pair-out cross-validation, fivefold cross-validation with repetitions, or the enhanced or .632+ bootstrap to estimate c-statistics, and leave-pair-out or fivefold cross-validation to estimate discrimination slopes. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s41512-023-00146-0.
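The mechanism behind the abstract's central finding can be illustrated with a deliberately extreme toy model (our own sketch, not the paper's simulation design): an intercept-only logistic regression, which predicts the training-set event fraction for every observation and so represents the limiting case of shrinkage towards the observed event rate. Under leave-one-out cross-validation, leaving out an event lowers the training event fraction, so every held-out event is ranked below every held-out non-event and the cross-validated c-statistic collapses to zero, even though the model's true discrimination is 0.5. Leave-pair-out cross-validation, which scores one event and one non-event with the same training fit, avoids this artefact:

```python
def c_statistic(p_events, p_nonevents):
    """Concordance (c-statistic / AUC): fraction of (event, non-event)
    pairs in which the event gets the higher prediction; ties count 0.5."""
    score = 0.0
    for pe in p_events:
        for pn in p_nonevents:
            score += 1.0 if pe > pn else 0.5 if pe == pn else 0.0
    return score / (len(p_events) * len(p_nonevents))

# Toy data: 5 events among n = 20 observations, no covariates, so any
# sensible model has true discrimination c = 0.5.
y = [1] * 5 + [0] * 15

def fit_predict(train):
    # Intercept-only logistic model: the predicted probability is simply
    # the event fraction of the training sample — the limiting case of
    # shrinking predictions towards the observed event rate.
    return sum(train) / len(train)

# Leave-one-out CV: dropping an event leaves 4/19 events in training,
# dropping a non-event leaves 5/19, so every held-out event is ranked
# strictly BELOW every held-out non-event.
loo_pred = [fit_predict(y[:i] + y[i + 1:]) for i in range(len(y))]
loo_auc = c_statistic([p for p, yi in zip(loo_pred, y) if yi == 1],
                      [p for p, yi in zip(loo_pred, y) if yi == 0])

# Leave-pair-out CV: hold out one event and one non-event together; both
# are scored by the same fitted model, so the pair is tied (scores 0.5).
events = [i for i, yi in enumerate(y) if yi == 1]
nonevents = [i for i, yi in enumerate(y) if yi == 0]
lpo_score = 0.0
for i in events:
    for j in nonevents:
        train = [yk for k, yk in enumerate(y) if k not in (i, j)]
        p_i = p_j = fit_predict(train)  # one training fit scores both
        lpo_score += 1.0 if p_i > p_j else 0.5
lpo_auc = lpo_score / (len(events) * len(nonevents))

print(f"LOO-CV c-statistic: {loo_auc:.3f}")  # 0.000 — maximal downward bias
print(f"LPO-CV c-statistic: {lpo_auc:.3f}")  # 0.500 — unbiased here
```

In this extreme case the leave-one-out c-statistic is exactly 0 for a model whose true discrimination is 0.5, which is the pessimistic bias the study reports for estimators that shrink towards the event fraction; the leave-pair-out estimate recovers 0.5.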
format Online
Article
Text
id pubmed-10152625
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-10152625 2023-05-03 Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study Geroldinger, Angelika; Lusa, Lara; Nold, Mariana; Heinze, Georg Diagn Progn Res Methodology
BioMed Central 2023-05-02 /pmc/articles/PMC10152625/ /pubmed/37127679 http://dx.doi.org/10.1186/s41512-023-00146-0 Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. To view a copy of the licence, visit https://creativecommons.org/licenses/by/4.0/.
spellingShingle Methodology
Geroldinger, Angelika
Lusa, Lara
Nold, Mariana
Heinze, Georg
Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
title Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
title_full Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
title_fullStr Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
title_full_unstemmed Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
title_short Leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
title_sort leave-one-out cross-validation, penalization, and differential bias of some prediction model performance measures—a simulation study
topic Methodology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10152625/
https://www.ncbi.nlm.nih.gov/pubmed/37127679
http://dx.doi.org/10.1186/s41512-023-00146-0
work_keys_str_mv AT geroldingerangelika leaveoneoutcrossvalidationpenalizationanddifferentialbiasofsomepredictionmodelperformancemeasuresasimulationstudy
AT lusalara leaveoneoutcrossvalidationpenalizationanddifferentialbiasofsomepredictionmodelperformancemeasuresasimulationstudy
AT noldmariana leaveoneoutcrossvalidationpenalizationanddifferentialbiasofsomepredictionmodelperformancemeasuresasimulationstudy
AT heinzegeorg leaveoneoutcrossvalidationpenalizationanddifferentialbiasofsomepredictionmodelperformancemeasuresasimulationstudy