Dose escalations in phase I studies: Feasibility of interpreting blinded pharmacodynamic data
Main Authors:
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc., 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805203/ | https://www.ncbi.nlm.nih.gov/pubmed/35895751 | http://dx.doi.org/10.1111/bcp.15473
Summary:

AIMS: During phase I study conduct, blinded data are reviewed to predict the safety of increasing the dose level. The aim of the present study was to describe the probability that effects are observed in blinded evaluations of data in a simulated phase I study design.

METHODS: An application was created to simulate blinded pharmacological response curves over time for 6 common safety/efficacy measurements in phase I studies for 1 or 2 cohorts (6 active, 2 placebo per cohort). Effect sizes between 0 and 3 between‐measurement standard deviations (SDs) were simulated. Each set of simulated graphs contained the individual response and mean ± SD over time. Reviewers (n = 34) reviewed a median of 100 simulated datasets and indicated whether an effect was present.

RESULTS: Increasing effect sizes resulted in a higher chance of the effect being identified by the blinded reviewer. On average, 6% of effect sizes of 0.5 between‐measurement SD were correctly identified, increasing to 72% for effect sizes of 3.0 between‐measurement SDs. In contrast, on average 92–95% of simulations with no effect were correctly identified, with little effect of between‐measurement variability in single‐cohort simulations. Adding a dataset of a second cohort at half the simulated dose did not appear to improve the interpretation.

CONCLUSION: Our analysis showed that effect sizes <2× the between‐measurement SD of the investigated outcome frequently go unnoticed by blinded reviewers, indicating that the weight given to these blinded analyses in current phase I practice is inappropriate and should be re‐evaluated.
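The simulation design described in the METHODS section can be illustrated with a minimal sketch. This is not the authors' application; it merely assumes a simple sinusoidal drug-effect profile and Gaussian between-measurement noise to show how a blinded cohort dataset (6 active, 2 placebo, with the effect size expressed in multiples of the between-measurement SD) might be generated and summarised as a mean ± SD curve:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_cohort(effect_size_sd, n_active=6, n_placebo=2,
                    n_timepoints=10, between_measurement_sd=1.0):
    """Simulate blinded response curves for one phase I cohort.

    effect_size_sd: drug effect in multiples of the between-measurement
    SD (the study summarised here used values from 0 to 3).
    Returns an (n_subjects, n_timepoints) array with subjects shuffled
    so the reviewer cannot tell active from placebo.
    """
    t = np.linspace(0.0, 1.0, n_timepoints)
    # Assumed illustrative effect profile: peak at mid-interval.
    profile = np.sin(np.pi * t)
    noise_shape = (n_active, n_timepoints)
    active = (effect_size_sd * between_measurement_sd * profile
              + rng.normal(0.0, between_measurement_sd, noise_shape))
    placebo = rng.normal(0.0, between_measurement_sd,
                         (n_placebo, n_timepoints))
    data = np.vstack([active, placebo])
    rng.shuffle(data)  # blind the treatment assignment
    return data

# A blinded reviewer would see only pooled summaries like these:
data = simulate_cohort(effect_size_sd=2.0)
mean_curve = data.mean(axis=0)
sd_curve = data.std(axis=0, ddof=1)
```

In this toy setup, an effect of 2 SDs is partially diluted in the pooled mean because the 2 placebo subjects are averaged in with the 6 active ones, which is consistent with the paper's finding that small effects are hard to spot in blinded review.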