
Expected clinical utility of automatable prediction models for improving palliative and end-of-life care outcomes: Toward routine decision analysis before implementation


Bibliographic Details

Main Authors: Taseen, Ryeyan; Ethier, Jean-François
Format: Online Article Text
Language: English
Published: Oxford University Press, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8510333/
https://www.ncbi.nlm.nih.gov/pubmed/34472611
http://dx.doi.org/10.1093/jamia/ocab140

Description
Summary:

OBJECTIVE: The study sought to evaluate the expected clinical utility of automatable prediction models for increasing goals-of-care discussions (GOCDs) among hospitalized patients at the end of life (EOL).

MATERIALS AND METHODS: We built a decision model from the perspective of clinicians who aim to increase GOCDs at the EOL using an automated alert system. The alternative strategies were 4 prediction models—3 random forest models and the Modified Hospital One-year Mortality Risk model—to generate alerts for patients at a high risk of 1-year mortality. They were trained on admissions from 2011 to 2016 (70 788 patients) and tested with admissions from 2017 to 2018 (16 490 patients). GOCDs occurring in usual care were measured with code status orders. We calculated the expected risk difference (beneficial outcomes with alerts minus beneficial outcomes without alerts among those at the EOL), the number needed to benefit (number of alerts needed to increase benefit over usual care by 1 outcome), and the net benefit (benefit minus cost) of each strategy.

RESULTS: Models had a C-statistic between 0.79 and 0.86. A code status order occurred during 2599 of 3773 (69%) hospitalizations at the EOL. At a risk threshold corresponding to an alert prevalence of 10%, the expected risk difference ranged from 5.4% to 10.7% and the number needed to benefit ranged from 5.4 to 10.9 alerts. Using revealed preferences, only 2 models improved net benefit over usual care. A random forest model with diagnostic predictors had the highest expected value, including in sensitivity analyses.

DISCUSSION: Prediction models with acceptable predictive validity differed meaningfully in their ability to improve over usual decision making.

CONCLUSIONS: An evaluation of clinical utility, such as by using decision curve analysis, is recommended after validating a prediction model because metrics of model predictiveness, such as the C-statistic, are not informative of clinical value.
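The decision-analytic quantities named in the abstract can be illustrated with a short sketch. Net benefit here follows the standard decision curve analysis formulation (true positives per patient, minus false positives weighted by the odds of the risk threshold), and number needed to benefit is alerts divided by additional beneficial outcomes over usual care. All counts and the threshold below are illustrative placeholders, not values from the study.

```python
def net_benefit(tp: int, fp: int, n: int, threshold: float) -> float:
    """Net benefit at a given risk threshold (decision curve analysis):
    true-positive rate minus false-positive rate weighted by threshold odds."""
    return tp / n - (fp / n) * (threshold / (1 - threshold))

def number_needed_to_benefit(n_alerts: int, extra_benefits: float) -> float:
    """Number of alerts needed to add one beneficial outcome over usual care."""
    return n_alerts / extra_benefits

# Hypothetical example: 1649 alerts fired (10% alert prevalence among
# 16 490 admissions), 400 of them true positives, and 200 additional
# beneficial outcomes attributed to the alerts.
nb = net_benefit(tp=400, fp=1249, n=16490, threshold=0.2)
nnb = number_needed_to_benefit(n_alerts=1649, extra_benefits=200)
print(round(nb, 4))   # net benefit per patient at this threshold
print(round(nnb, 1))  # alerts per additional beneficial outcome
```

A strategy is preferred over usual care when its net benefit at the clinically relevant threshold exceeds that of the usual-care strategy, which is how the abstract's comparison of the 4 models against usual decision making is framed.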