
Pediatric prognostic models predicting inhospital child mortality in resource‐limited settings: An external validation study

Bibliographic Details
Main Authors: Ogero, Morris; Ndiritu, John; Sarguta, Rachel; Tuti, Timothy; Akech, Samuel
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10460931/
https://www.ncbi.nlm.nih.gov/pubmed/37645032
http://dx.doi.org/10.1002/hsr2.1433
Description
Summary:

BACKGROUND AND AIMS: Prognostic models provide evidence‐based predictions and estimates of future outcomes, facilitating decision‐making, patient care, and research. Only a few of these models have been externally validated, leaving their reliability and generalizability uncertain. This study aims to externally validate four models to assess their transferability and usefulness in clinical practice. The models include the respiratory index of severity in children (RISC)‐Malawi model and three other models by Lowlavaar et al.

METHODS: The study used data from the Clinical Information Network (CIN) to validate the four models, where the primary outcome was in‐hospital mortality. A total of 163,329 patients met the eligibility criteria. Missing data were imputed, and the logistic function was used to compute the predicted risk of in‐hospital mortality. The models' discriminatory ability and calibration were assessed using the area under the curve (AUC), calibration slope, and calibration intercept.

RESULTS: For the RISC‐Malawi model, 50,669 pneumonia patients met the eligibility criteria, of whom 4406 died in hospital (case‐fatality ratio 8.7%). Its AUC was 0.77 (95% CI: 0.77−0.78), the calibration slope was 1.04 (95% CI: 1.00−1.06), and the calibration intercept was 0.81 (95% CI: 0.77−0.84). Regarding the external validation of the Lowlavaar et al. models, 10,782 eligible patients were included, with an in‐hospital mortality rate of 5.3%. The primary model's AUC was 0.75 (95% CI: 0.72−0.77), the calibration slope was 0.78 (95% CI: 0.71−0.84), and the calibration intercept was 0.37 (95% CI: 0.28−0.46). All models markedly underestimated the risk of mortality.

CONCLUSION: All externally validated models either underestimated or overestimated risk, as judged from the calibration statistics. Hence, applying these models with confidence in settings other than their original development context may not be advisable. Our findings strongly suggest the need to recalibrate these models to enhance their generalizability.
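
The validation metrics reported in the abstract (AUC, calibration slope, and calibration intercept) can be computed from a model's linear predictor and the observed outcomes. The sketch below is a minimal illustration using simulated data and the standard logistic-regression definitions of these metrics; it is not the authors' code, and the variable names and data are hypothetical, not the CIN dataset or the published model coefficients.

```python
# Minimal sketch of the validation metrics described in the abstract.
# Assumptions: `lp` is the linear predictor produced by an existing prognostic
# model for each patient, and `y` is observed in-hospital mortality (0/1).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
lp = rng.normal(-2.5, 1.0, n)                # linear predictor (simulated)
y = rng.binomial(1, 1 / (1 + np.exp(-lp)))   # observed outcome (simulated)

# Predicted risk via the logistic function
risk = 1 / (1 + np.exp(-lp))

# Discrimination: area under the ROC curve
auc = roc_auc_score(y, risk)

# Calibration slope: coefficient of the linear predictor when the outcome is
# regressed on it in a new logistic model
slope_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
cal_slope = slope_fit.params[1]

# Calibration intercept (calibration-in-the-large): intercept of a logistic
# model with the linear predictor entered as an offset
int_fit = sm.GLM(y, np.ones((n, 1)), family=sm.families.Binomial(), offset=lp).fit()
cal_intercept = int_fit.params[0]

print(f"AUC={auc:.2f}, slope={cal_slope:.2f}, intercept={cal_intercept:.2f}")
```

Under this convention, a calibration slope near 1 and an intercept near 0 indicate good agreement between predicted and observed risk; a positive intercept, as reported for the validated models, indicates that predicted risks are systematically too low.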