
External validation of existing dementia prediction models on observational health data

Bibliographic Details
Main Authors: John, Luis H., Kors, Jan A., Fridgeirsson, Egill A., Reps, Jenna M., Rijnbeek, Peter R.
Format: Online Article Text
Language: English
Published: BioMed Central 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9720950/
https://www.ncbi.nlm.nih.gov/pubmed/36471238
http://dx.doi.org/10.1186/s12874-022-01793-5
_version_ 1784843658547167232
author John, Luis H.
Kors, Jan A.
Fridgeirsson, Egill A.
Reps, Jenna M.
Rijnbeek, Peter R.
author_facet John, Luis H.
Kors, Jan A.
Fridgeirsson, Egill A.
Reps, Jenna M.
Rijnbeek, Peter R.
author_sort John, Luis H.
collection PubMed
description BACKGROUND: Many dementia prediction models have been developed, but only a few have been externally validated, which hinders clinical uptake and may pose a risk if models are nonetheless applied to actual patients. Externally validating an existing prediction model is a difficult task, in which we mostly rely on the completeness of model reporting in a published article. In this study, we aim to externally validate existing dementia prediction models. To that end, we define model reporting criteria, review published studies, and externally validate three well-reported models using routinely collected health data from administrative claims and electronic health records. METHODS: We identified dementia prediction models that were developed between 2011 and 2020 and assessed whether they could be externally validated given a set of model criteria. In addition, we externally validated three of these models (Walters’ Dementia Risk Score, Mehta’s RxDx-Dementia Risk Index, and Nori’s ADRD dementia prediction model) on a network of six observational health databases from the United States, United Kingdom, Germany, and the Netherlands, including the original development databases of the models. RESULTS: We reviewed 59 dementia prediction models. All models reported the prediction method, development database, and target and outcome definitions. Less frequently reported by these 59 prediction models were predictor definitions (52 models), including the time window in which a predictor is assessed (21 models), predictor coefficients (20 models), and the time-at-risk (42 models). The validation of the model by Walters (development c-statistic: 0.84) showed moderate transportability (0.67–0.76 c-statistic). The Mehta model (development c-statistic: 0.81) transported well to some of the external databases (0.69–0.79 c-statistic). The Nori model (development AUROC: 0.69) transported well (0.62–0.68 AUROC) but performed modestly overall. Recalibration showed improvements for the Walters and Nori models, while recalibration could not be assessed for the Mehta model because its baseline hazard was not reported. CONCLUSION: We observed that reporting is mostly insufficient to fully externally validate published dementia prediction models, and it is therefore uncertain how well these models would work in other clinical settings. We emphasize the importance of following established guidelines for reporting clinical prediction models. We recommend that reporting be more explicit and written with external validation in mind if the model is meant to be applied in different settings. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-022-01793-5.
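The abstract reports discrimination as a c-statistic (equivalent to the AUROC for a binary outcome) and describes recalibrating the models on each external database. As a minimal illustrative sketch only, assuming Python with NumPy and scikit-learn, entirely synthetic data, and hypothetical coefficients (this is not the pipeline used in the study), external validation and logistic recalibration of a published logistic-regression risk score could look like this:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical published model: intercept and coefficients for three predictors
    # (age, diabetes, depression), as they would be transcribed from an article.
    intercept = -4.0
    coefs = np.array([0.05, 0.7, 0.6])

    # Synthetic "external validation" cohort: predictor matrix X, observed outcomes y.
    n = 5000
    X = np.column_stack([
        rng.normal(70, 8, n),      # age in years
        rng.binomial(1, 0.20, n),  # diabetes (yes/no)
        rng.binomial(1, 0.15, n),  # depression (yes/no)
    ])
    true_lp = -4.5 + 0.06 * X[:, 0] + 0.5 * X[:, 1] + 0.4 * X[:, 2]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_lp)))

    # 1) Apply the published model as-is: linear predictor -> predicted risk.
    lp = intercept + X @ coefs
    p_orig = 1.0 / (1.0 + np.exp(-lp))

    # 2) Discrimination on the external data: the c-statistic equals the AUROC.
    print(f"external c-statistic: {roc_auc_score(y, p_orig):.3f}")

    # 3) Logistic recalibration: re-estimate the intercept and calibration slope on
    #    the original linear predictor, keeping the relative predictor weights fixed.
    #    (A large C effectively disables regularization.)
    recal = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y)
    p_recal = recal.predict_proba(lp.reshape(-1, 1))[:, 1]
    print("recalibration intercept:", round(float(recal.intercept_[0]), 3),
          "slope:", round(float(recal.coef_[0, 0]), 3))

Because this recalibration is a monotone transform of the linear predictor, it changes calibration but not discrimination. For a survival-type model the analogous step additionally requires the reported baseline hazard, which is why the abstract notes that recalibration could not be assessed for the Mehta model.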
format Online
Article
Text
id pubmed-9720950
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-9720950 2022-12-06 External validation of existing dementia prediction models on observational health data John, Luis H. Kors, Jan A. Fridgeirsson, Egill A. Reps, Jenna M. Rijnbeek, Peter R. BMC Med Res Methodol Research BACKGROUND: Many dementia prediction models have been developed, but only a few have been externally validated, which hinders clinical uptake and may pose a risk if models are nonetheless applied to actual patients. Externally validating an existing prediction model is a difficult task, in which we mostly rely on the completeness of model reporting in a published article. In this study, we aim to externally validate existing dementia prediction models. To that end, we define model reporting criteria, review published studies, and externally validate three well-reported models using routinely collected health data from administrative claims and electronic health records. METHODS: We identified dementia prediction models that were developed between 2011 and 2020 and assessed whether they could be externally validated given a set of model criteria. In addition, we externally validated three of these models (Walters’ Dementia Risk Score, Mehta’s RxDx-Dementia Risk Index, and Nori’s ADRD dementia prediction model) on a network of six observational health databases from the United States, United Kingdom, Germany, and the Netherlands, including the original development databases of the models. RESULTS: We reviewed 59 dementia prediction models. All models reported the prediction method, development database, and target and outcome definitions. Less frequently reported by these 59 prediction models were predictor definitions (52 models), including the time window in which a predictor is assessed (21 models), predictor coefficients (20 models), and the time-at-risk (42 models). The validation of the model by Walters (development c-statistic: 0.84) showed moderate transportability (0.67–0.76 c-statistic). The Mehta model (development c-statistic: 0.81) transported well to some of the external databases (0.69–0.79 c-statistic). The Nori model (development AUROC: 0.69) transported well (0.62–0.68 AUROC) but performed modestly overall. Recalibration showed improvements for the Walters and Nori models, while recalibration could not be assessed for the Mehta model because its baseline hazard was not reported. CONCLUSION: We observed that reporting is mostly insufficient to fully externally validate published dementia prediction models, and it is therefore uncertain how well these models would work in other clinical settings. We emphasize the importance of following established guidelines for reporting clinical prediction models. We recommend that reporting be more explicit and written with external validation in mind if the model is meant to be applied in different settings. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-022-01793-5. BioMed Central 2022-12-05 /pmc/articles/PMC9720950/ /pubmed/36471238 http://dx.doi.org/10.1186/s12874-022-01793-5 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
spellingShingle Research
John, Luis H.
Kors, Jan A.
Fridgeirsson, Egill A.
Reps, Jenna M.
Rijnbeek, Peter R.
External validation of existing dementia prediction models on observational health data
title External validation of existing dementia prediction models on observational health data
title_full External validation of existing dementia prediction models on observational health data
title_fullStr External validation of existing dementia prediction models on observational health data
title_full_unstemmed External validation of existing dementia prediction models on observational health data
title_short External validation of existing dementia prediction models on observational health data
title_sort external validation of existing dementia prediction models on observational health data
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9720950/
https://www.ncbi.nlm.nih.gov/pubmed/36471238
http://dx.doi.org/10.1186/s12874-022-01793-5
work_keys_str_mv AT johnluish externalvalidationofexistingdementiapredictionmodelsonobservationalhealthdata
AT korsjana externalvalidationofexistingdementiapredictionmodelsonobservationalhealthdata
AT fridgeirssonegilla externalvalidationofexistingdementiapredictionmodelsonobservationalhealthdata
AT repsjennam externalvalidationofexistingdementiapredictionmodelsonobservationalhealthdata
AT rijnbeekpeterr externalvalidationofexistingdementiapredictionmodelsonobservationalhealthdata