
Developing more generalizable prediction models from pooled studies and large clustered data sets

Prediction models often yield inaccurate predictions for new individuals. Large data sets from pooled studies or electronic healthcare records may alleviate this with an increased sample size and variability in sample characteristics. However, existing strategies for prediction model development generally do not account for heterogeneity in predictor‐outcome associations between different settings and populations. This limits the generalizability of developed models (even from large, combined, clustered data sets) and necessitates local revisions. We aim to develop methodology for producing prediction models that require less tailoring to different settings and populations. We adopt internal‐external cross‐validation to assess and reduce heterogeneity in models' predictive performance during the development. We propose a predictor selection algorithm that optimizes the (weighted) average performance while minimizing its variability across the hold‐out clusters (or studies). Predictors are added iteratively until the estimated generalizability is optimized. We illustrate this by developing a model for predicting the risk of atrial fibrillation and updating an existing one for diagnosing deep vein thrombosis, using individual participant data from 20 cohorts (N = 10 873) and 11 diagnostic studies (N = 10 014), respectively. Meta‐analysis of calibration and discrimination performance in each hold‐out cluster shows that trade‐offs between average and heterogeneity of performance occurred. Our methodology enables the assessment of heterogeneity of prediction model performance during model development in multiple or clustered data sets, thereby informing researchers on predictor selection to improve the generalizability to different settings and populations, and reduce the need for model tailoring. Our methodology has been implemented in the R package metamisc.
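As a sketch of the meta-analytic step the abstract describes (the standard random-effects model commonly paired with internal-external cross-validation; the exact weighting the authors use is an assumption here): each cross-validation round yields a performance estimate for hold-out cluster i, such as its c-statistic or calibration slope, which can be modeled as

    \hat{\theta}_i \sim \mathcal{N}(\theta_i, \hat{\sigma}_i^2), \qquad \theta_i \sim \mathcal{N}(\mu, \tau^2),

where \mu is the (weighted) average performance across clusters and \tau^2 the between-cluster heterogeneity. The proposed predictor selection then trades off a larger \mu against a smaller \tau^2.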


Bibliographic Details
Main Authors: de Jong, Valentijn M. T., Moons, Karel G. M., Eijkemans, Marinus J. C., Riley, Richard D., Debray, Thomas P. A.
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8252590/
https://www.ncbi.nlm.nih.gov/pubmed/33948970
http://dx.doi.org/10.1002/sim.8981
_version_ 1783717333250867200
author de Jong, Valentijn M. T.
Moons, Karel G. M.
Eijkemans, Marinus J. C.
Riley, Richard D.
Debray, Thomas P. A.
author_facet de Jong, Valentijn M. T.
Moons, Karel G. M.
Eijkemans, Marinus J. C.
Riley, Richard D.
Debray, Thomas P. A.
author_sort de Jong, Valentijn M. T.
collection PubMed
description Prediction models often yield inaccurate predictions for new individuals. Large data sets from pooled studies or electronic healthcare records may alleviate this with an increased sample size and variability in sample characteristics. However, existing strategies for prediction model development generally do not account for heterogeneity in predictor‐outcome associations between different settings and populations. This limits the generalizability of developed models (even from large, combined, clustered data sets) and necessitates local revisions. We aim to develop methodology for producing prediction models that require less tailoring to different settings and populations. We adopt internal‐external cross‐validation to assess and reduce heterogeneity in models' predictive performance during the development. We propose a predictor selection algorithm that optimizes the (weighted) average performance while minimizing its variability across the hold‐out clusters (or studies). Predictors are added iteratively until the estimated generalizability is optimized. We illustrate this by developing a model for predicting the risk of atrial fibrillation and updating an existing one for diagnosing deep vein thrombosis, using individual participant data from 20 cohorts (N = 10 873) and 11 diagnostic studies (N = 10 014), respectively. Meta‐analysis of calibration and discrimination performance in each hold‐out cluster shows that trade‐offs between average and heterogeneity of performance occurred. Our methodology enables the assessment of heterogeneity of prediction model performance during model development in multiple or clustered data sets, thereby informing researchers on predictor selection to improve the generalizability to different settings and populations, and reduce the need for model tailoring. Our methodology has been implemented in the R package metamisc.
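To make the selection algorithm concrete, here is a minimal Python sketch under stated assumptions: logistic regression as the model, the c-statistic (AUC) as the performance measure, and a simple mean-minus-lambda*SD criterion standing in for the paper's random-effects meta-analysis of calibration and discrimination. The authors' actual implementation is the R package metamisc, whose interface is not reproduced here.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def iecv_auc(X, y, clusters, predictors):
    # Internal-external cross-validation: hold out one cluster at a time,
    # fit on the remaining clusters, and collect the hold-out AUCs.
    aucs = []
    for c in np.unique(clusters):
        train, test = clusters != c, clusters == c
        model = LogisticRegression(max_iter=1000)
        model.fit(X[np.ix_(train, predictors)], y[train])
        p = model.predict_proba(X[np.ix_(test, predictors)])[:, 1]
        if len(np.unique(y[test])) == 2:  # AUC needs both outcome classes
            aucs.append(roc_auc_score(y[test], p))
    return np.array(aucs)

def select_predictors(X, y, clusters, lam=1.0):
    # Greedy forward selection: iteratively add the predictor that most
    # improves average hold-out performance penalized by its variability
    # across clusters, and stop when no candidate improves the criterion.
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining:
        scores = {}
        for j in remaining:
            aucs = iecv_auc(X, y, clusters, selected + [j])
            scores[j] = aucs.mean() - lam * aucs.std()  # the trade-off
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected

# Hypothetical demo data: 10 clusters, 6 candidate predictors, of which
# only the first two carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
clusters = rng.integers(0, 10, size=2000)
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1]
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
print(select_predictors(X, y, clusters, lam=1.0))  # typically [0, 1]

With lam = 0 this reduces to ordinary forward selection on average hold-out performance; larger values of lam favor predictors whose added value is consistent across clusters, which is the generalizability trade-off the abstract describes.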
format Online
Article
Text
id pubmed-8252590
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-8252590 2021-07-09 Developing more generalizable prediction models from pooled studies and large clustered data sets de Jong, Valentijn M. T. Moons, Karel G. M. Eijkemans, Marinus J. C. Riley, Richard D. Debray, Thomas P. A. Stat Med Research Articles Prediction models often yield inaccurate predictions for new individuals. Large data sets from pooled studies or electronic healthcare records may alleviate this with an increased sample size and variability in sample characteristics. However, existing strategies for prediction model development generally do not account for heterogeneity in predictor‐outcome associations between different settings and populations. This limits the generalizability of developed models (even from large, combined, clustered data sets) and necessitates local revisions. We aim to develop methodology for producing prediction models that require less tailoring to different settings and populations. We adopt internal‐external cross‐validation to assess and reduce heterogeneity in models' predictive performance during the development. We propose a predictor selection algorithm that optimizes the (weighted) average performance while minimizing its variability across the hold‐out clusters (or studies). Predictors are added iteratively until the estimated generalizability is optimized. We illustrate this by developing a model for predicting the risk of atrial fibrillation and updating an existing one for diagnosing deep vein thrombosis, using individual participant data from 20 cohorts (N = 10 873) and 11 diagnostic studies (N = 10 014), respectively. Meta‐analysis of calibration and discrimination performance in each hold‐out cluster shows that trade‐offs between average and heterogeneity of performance occurred. Our methodology enables the assessment of heterogeneity of prediction model performance during model development in multiple or clustered data sets, thereby informing researchers on predictor selection to improve the generalizability to different settings and populations, and reduce the need for model tailoring. Our methodology has been implemented in the R package metamisc. John Wiley and Sons Inc. 2021-05-05 2021-07-10 /pmc/articles/PMC8252590/ /pubmed/33948970 http://dx.doi.org/10.1002/sim.8981 Text en © 2021 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Articles
de Jong, Valentijn M. T.
Moons, Karel G. M.
Eijkemans, Marinus J. C.
Riley, Richard D.
Debray, Thomas P. A.
Developing more generalizable prediction models from pooled studies and large clustered data sets
title Developing more generalizable prediction models from pooled studies and large clustered data sets
title_full Developing more generalizable prediction models from pooled studies and large clustered data sets
title_fullStr Developing more generalizable prediction models from pooled studies and large clustered data sets
title_full_unstemmed Developing more generalizable prediction models from pooled studies and large clustered data sets
title_short Developing more generalizable prediction models from pooled studies and large clustered data sets
title_sort developing more generalizable prediction models from pooled studies and large clustered data sets
topic Research Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8252590/
https://www.ncbi.nlm.nih.gov/pubmed/33948970
http://dx.doi.org/10.1002/sim.8981
work_keys_str_mv AT dejongvalentijnmt developingmoregeneralizablepredictionmodelsfrompooledstudiesandlargeclustereddatasets
AT moonskarelgm developingmoregeneralizablepredictionmodelsfrompooledstudiesandlargeclustereddatasets
AT eijkemansmarinusjc developingmoregeneralizablepredictionmodelsfrompooledstudiesandlargeclustereddatasets
AT rileyrichardd developingmoregeneralizablepredictionmodelsfrompooledstudiesandlargeclustereddatasets
AT debraythomaspa developingmoregeneralizablepredictionmodelsfrompooledstudiesandlargeclustereddatasets