Performance metrics for models designed to predict treatment effect
Main Authors: Maas, C. C. H. M.; Kent, D. M.; Hughes, M. C.; Dekker, R.; Lingsma, H. F.; van Klaveren, D.
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10329397/ https://www.ncbi.nlm.nih.gov/pubmed/37422647 http://dx.doi.org/10.1186/s12874-023-01974-w
_version_ | 1785070010703544320 |
author | Maas, C. C. H. M. Kent, D. M. Hughes, M. C. Dekker, R. Lingsma, H. F. van Klaveren, D. |
author_facet | Maas, C. C. H. M. Kent, D. M. Hughes, M. C. Dekker, R. Lingsma, H. F. van Klaveren, D. |
author_sort | Maas, C. C. H. M. |
collection | PubMed |
description | BACKGROUND: Measuring the performance of models that predict individualized treatment effect is challenging because the outcomes of two alternative treatments are inherently unobservable in one patient. The C-for-benefit was proposed to measure discriminative ability. However, measures of calibration and overall performance are still lacking. We aimed to propose metrics of calibration and overall performance for models predicting treatment effect in randomized clinical trials (RCTs). METHODS: Similar to the previously proposed C-for-benefit, we define the observed pairwise treatment effect as the difference between outcomes in pairs of matched patients with different treatment assignment. We match each untreated patient with the nearest treated patient based on the Mahalanobis distance between patient characteristics. Then, we define the E(avg)-for-benefit, E(50)-for-benefit, and E(90)-for-benefit as the average, median, and 90th quantile of the absolute distance between the predicted pairwise treatment effects and local-regression-smoothed observed pairwise treatment effects. Furthermore, we define the cross-entropy-for-benefit and Brier-for-benefit as the logarithmic and average squared distance between predicted and observed pairwise treatment effects. In a simulation study, the metric values of deliberately "perturbed models" were compared to those of the data-generating model, i.e., the "optimal model". To illustrate these performance metrics, different modeling approaches for predicting treatment effect are applied to the data of the Diabetes Prevention Program: 1) a risk modelling approach with restricted cubic splines; 2) an effect modelling approach including penalized treatment interactions; and 3) the causal forest.
RESULTS: As desired, performance metric values of "perturbed models" were consistently worse than those of the "optimal model" (E(avg)-for-benefit ≥ 0.043 versus 0.002, E(50)-for-benefit ≥ 0.032 versus 0.001, E(90)-for-benefit ≥ 0.084 versus 0.004, cross-entropy-for-benefit ≥ 0.765 versus 0.750, Brier-for-benefit ≥ 0.220 versus 0.218). Calibration, discriminative ability, and overall performance of three different models were similar in the case study. The proposed metrics are implemented in a publicly available R package, "HTEPredictionMetrics". CONCLUSION: The proposed metrics are useful to assess the calibration and overall performance of models predicting treatment effect in RCTs. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12874-023-01974-w. |
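The matching and metric definitions in the description above can be sketched in code. This is a minimal Python illustration, not the authors' "HTEPredictionMetrics" R package: the function names are hypothetical, the pair-level prediction (average of the two matched patients' predicted effects) and the running-mean smoother (a stand-in for the paper's local regression) are assumptions, and the cross-entropy-for-benefit is omitted because it requires predicted probabilities of the pair outcomes rather than point predictions.

```python
import numpy as np

def match_pairs(X, treated):
    """Match each untreated patient to the nearest treated patient by the
    Mahalanobis distance between patient characteristics."""
    VI = np.linalg.inv(np.cov(X, rowvar=False))        # inverse covariance matrix
    d = X[~treated][:, None, :] - X[treated][None, :, :]
    D2 = np.einsum("ijk,kl,ijl->ij", d, VI, d)         # squared Mahalanobis distances
    return np.argmin(D2, axis=1)                       # nearest treated index per untreated patient

def metrics_for_benefit(X, y, treated, tau_hat, window=51):
    """E(avg)-, E(50)-, E(90)- and Brier-for-benefit for binary outcomes,
    with the treatment effect coded so positive values mean benefit
    (tau = risk if untreated - risk if treated)."""
    j = match_pairs(X, treated)
    obs = y[~treated] - y[treated][j]                  # observed pairwise effect in {-1, 0, 1}
    pred = (tau_hat[~treated] + tau_hat[treated][j]) / 2   # assumed pair-level prediction
    order = np.argsort(pred)
    pred, obs = pred[order], obs[order].astype(float)
    # Running mean over neighbouring pairs: a crude stand-in for the paper's
    # local-regression smoothing of observed pairwise effects.
    k = min(window, len(obs))
    smooth = np.convolve(obs, np.ones(k) / k, mode="same")
    err = np.abs(pred - smooth)                        # calibration error per pair
    return {"Eavg": err.mean(),                        # average absolute distance
            "E50": np.quantile(err, 0.5),              # median absolute distance
            "E90": np.quantile(err, 0.9),              # 90th-quantile absolute distance
            "Brier": np.mean((pred - obs) ** 2)}       # average squared distance

# Synthetic demo: 400 patients, 3 covariates, a constant predicted benefit.
rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))
treated = rng.random(n) < 0.5
risk = 1 / (1 + np.exp(-(X[:, 0] - 0.8 * treated)))    # treatment lowers risk
y = (rng.random(n) < risk).astype(float)
m = metrics_for_benefit(X, y, treated, np.full(n, 0.15))
```

Because the predicted effect here is constant, the E-metrics simply measure how far 0.15 sits from the smoothed observed pairwise benefit; with patient-specific predictions the same code compares the whole predicted-versus-smoothed-observed curve.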
format | Online Article Text |
id | pubmed-10329397 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-10329397 2023-07-09 Performance metrics for models designed to predict treatment effect Maas, C. C. H. M. Kent, D. M. Hughes, M. C. Dekker, R. Lingsma, H. F. van Klaveren, D. BMC Med Res Methodol Research BioMed Central 2023-07-08 /pmc/articles/PMC10329397/ /pubmed/37422647 http://dx.doi.org/10.1186/s12874-023-01974-w Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). The Creative Commons Public Domain Dedication waiver (https://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Maas, C. C. H. M. Kent, D. M. Hughes, M. C. Dekker, R. Lingsma, H. F. van Klaveren, D. Performance metrics for models designed to predict treatment effect |
title | Performance metrics for models designed to predict treatment effect |
title_full | Performance metrics for models designed to predict treatment effect |
title_fullStr | Performance metrics for models designed to predict treatment effect |
title_full_unstemmed | Performance metrics for models designed to predict treatment effect |
title_short | Performance metrics for models designed to predict treatment effect |
title_sort | performance metrics for models designed to predict treatment effect |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10329397/ https://www.ncbi.nlm.nih.gov/pubmed/37422647 http://dx.doi.org/10.1186/s12874-023-01974-w |
work_keys_str_mv | AT maascchm performancemetricsformodelsdesignedtopredicttreatmenteffect AT kentdm performancemetricsformodelsdesignedtopredicttreatmenteffect AT hughesmc performancemetricsformodelsdesignedtopredicttreatmenteffect AT dekkerr performancemetricsformodelsdesignedtopredicttreatmenteffect AT lingsmahf performancemetricsformodelsdesignedtopredicttreatmenteffect AT vanklaverend performancemetricsformodelsdesignedtopredicttreatmenteffect |