Assessing Discriminative Performance at External Validation of Clinical Prediction Models
INTRODUCTION: External validation studies are essential to study the generalizability of prediction models. Recently, a permutation test focusing on discrimination, as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting.
Main authors: | Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W. |
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2016 |
Subjects: | Research Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4755533/ https://www.ncbi.nlm.nih.gov/pubmed/26881753 http://dx.doi.org/10.1371/journal.pone.0148820 |
_version_ | 1782416203555274752 |
author | Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W. |
author_facet | Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W. |
author_sort | Nieboer, Daan |
collection | PubMed |
description | INTRODUCTION: External validation studies are essential to study the generalizability of prediction models. Recently, a permutation test focusing on discrimination, as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures for judging changes in the c-statistic from the development to the external validation setting. METHODS: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge the transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in a validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set than in the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development sets. Furthermore, we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. RESULTS: The permutation test indicated that the validation and development sets were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. CONCLUSION: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation populations. To correctly interpret the c-statistic found at external validation, it is crucial to disentangle case-mix differences from incorrect regression coefficients. |
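The kind of permutation test described in the abstract can be sketched as follows. This is a generic illustration, not the exact procedure evaluated in the study: it assumes each observation carries a binary outcome and a linear predictor from the fitted logistic model, and all function names are hypothetical.

```python
import numpy as np

def c_statistic(y, lp):
    """Concordance (c) statistic: fraction of event/non-event pairs in
    which the linear predictor ranks the event higher; ties count 0.5."""
    events = lp[y == 1]
    nonevents = lp[y == 0]
    if events.size == 0 or nonevents.size == 0:
        return 0.5  # undefined without both outcomes; return neutral value
    diff = events[:, None] - nonevents[None, :]  # all pairwise comparisons
    return float(np.mean(diff > 0) + 0.5 * np.mean(diff == 0))

def permutation_test(y_dev, lp_dev, y_val, lp_val, n_perm=1000, seed=0):
    """Pool the development and validation observations, repeatedly permute
    set membership, and compare the observed |c_dev - c_val| against its
    permutation distribution. Returns a permutation p-value; a small value
    suggests the two sets discriminate differently."""
    rng = np.random.default_rng(seed)
    observed = abs(c_statistic(y_dev, lp_dev) - c_statistic(y_val, lp_val))
    y = np.concatenate([y_dev, y_val])
    lp = np.concatenate([lp_dev, lp_val])
    n_dev = len(y_dev)
    extreme = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(y))  # random reassignment to the two sets
        d = abs(c_statistic(y[idx[:n_dev]], lp[idx[:n_dev]])
                - c_statistic(y[idx[n_dev:]], lp[idx[n_dev:]]))
        if d >= observed:
            extreme += 1
    return extreme / n_perm
```

Note that, as the abstract's conclusion warns, a test of this form compares observed c-statistics directly, so case-mix differences between the sets can drive the result even when the regression coefficients remain valid.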
format | Online Article Text |
id | pubmed-4755533 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-47555332016-02-26 Assessing Discriminative Performance at External Validation of Clinical Prediction Models Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W. PLoS One Research Article Public Library of Science 2016-02-16 /pmc/articles/PMC4755533/ /pubmed/26881753 http://dx.doi.org/10.1371/journal.pone.0148820 Text en © 2016 Nieboer et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Nieboer, Daan van der Ploeg, Tjeerd Steyerberg, Ewout W. Assessing Discriminative Performance at External Validation of Clinical Prediction Models |
title | Assessing Discriminative Performance at External Validation of Clinical Prediction Models |
title_full | Assessing Discriminative Performance at External Validation of Clinical Prediction Models |
title_fullStr | Assessing Discriminative Performance at External Validation of Clinical Prediction Models |
title_full_unstemmed | Assessing Discriminative Performance at External Validation of Clinical Prediction Models |
title_short | Assessing Discriminative Performance at External Validation of Clinical Prediction Models |
title_sort | assessing discriminative performance at external validation of clinical prediction models |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4755533/ https://www.ncbi.nlm.nih.gov/pubmed/26881753 http://dx.doi.org/10.1371/journal.pone.0148820 |
work_keys_str_mv | AT nieboerdaan assessingdiscriminativeperformanceatexternalvalidationofclinicalpredictionmodels AT vanderploegtjeerd assessingdiscriminativeperformanceatexternalvalidationofclinicalpredictionmodels AT steyerbergewoutw assessingdiscriminativeperformanceatexternalvalidationofclinicalpredictionmodels |