Point estimation for adaptive trial designs I: A methodological review
Main authors:
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc., 2022
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7613995/
https://www.ncbi.nlm.nih.gov/pubmed/36451173
http://dx.doi.org/10.1002/sim.9605
Summary: Recent FDA guidance on adaptive clinical trial designs defines bias as “a systematic tendency for the estimate of treatment effect to deviate from its true value,” and states that it is desirable to obtain and report estimates of treatment effects that reduce or remove this bias. The conventional end‐of‐trial point estimates of the treatment effects are prone to bias in many adaptive designs, because they do not take into account the potential and realized trial adaptations. While much of the methodological development on adaptive designs has focused on control of type I error rates and power considerations, the question of biased estimation has received relatively less attention. This article is the first in a two‐part series that studies the issue of potential bias in point estimation for adaptive trials. Part I provides a comprehensive review of the methods to remove or reduce the potential bias in point estimation of treatment effects for adaptive designs, while Part II illustrates how to implement these in practice and proposes a set of guidelines for trial statisticians. The methods reviewed in this article can be broadly classified into unbiased and bias‐reduced estimation, and we also provide a classification of estimators by the type of adaptive design. We compare the proposed methods, highlight available software and code, and discuss potential methodological gaps in the literature.
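The summary states that conventional end-of-trial estimates are biased because they ignore the potential and realized trial adaptations. The following is a minimal simulation sketch, not taken from the article, illustrating this point for a simple two-stage design with early stopping for efficacy; all design parameters (per-arm sample sizes, stopping boundary, true effect, and standard deviation) are illustrative assumptions.

```python
# Illustrative simulation (not from the reviewed article): in a two-stage design
# with early stopping for efficacy, the naive end-of-trial mean difference
# deviates from the true treatment effect, most markedly conditional on early
# stopping. All parameters below are assumptions chosen for illustration.
import numpy as np

rng = np.random.default_rng(1)

delta, sigma = 0.2, 1.0   # assumed true treatment effect and known SD
n1, n2 = 50, 50           # assumed per-arm sample sizes for stages 1 and 2
c = 1.96                  # assumed efficacy boundary on the stage-1 z-statistic
n_sim = 100_000

est = np.empty(n_sim)
stopped = np.empty(n_sim, dtype=bool)

for i in range(n_sim):
    # Stage 1 data for treatment and control arms
    t1 = rng.normal(delta, sigma, n1)
    c1 = rng.normal(0.0, sigma, n1)
    diff1 = t1.mean() - c1.mean()
    z1 = diff1 / (sigma * np.sqrt(2.0 / n1))
    if z1 > c:
        # Stop early for efficacy: naive estimate uses stage-1 data only
        est[i], stopped[i] = diff1, True
    else:
        # Continue to stage 2: naive estimate pools both stages
        t2 = rng.normal(delta, sigma, n2)
        c2 = rng.normal(0.0, sigma, n2)
        est[i] = (np.concatenate([t1, t2]).mean()
                  - np.concatenate([c1, c2]).mean())
        stopped[i] = False

print(f"true effect:                      {delta:.3f}")
print(f"mean naive estimate (overall):    {est.mean():.3f}")
print(f"mean naive estimate | stopped:    {est[stopped].mean():.3f}")
print(f"mean naive estimate | continued:  {est[~stopped].mean():.3f}")
```

In this sketch the estimate reported after early stopping overstates the true effect because only large stage-1 differences cross the boundary; the unbiased and bias-reduced estimators reviewed in the article are designed to correct for exactly this kind of selection induced by the trial adaptations.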