Optimal phase II/III drug development planning for programs with multiple normally distributed endpoints
Source: R/optimal_multiple_normal.R
optimal_multiple_normal.Rd
The function optimal_multiple_normal
of the drugdevelopR
package enables planning of phase II/III drug development programs with
optimal sample size allocation and go/no-go decision rules for two-arm
trials with two normally distributed endpoints and one control group
(Preussler et al., 2019).
Usage
optimal_multiple_normal(
Delta1,
Delta2,
in1,
in2,
sigma1,
sigma2,
n2min,
n2max,
stepn2,
kappamin,
kappamax,
stepkappa,
alpha,
beta,
c2,
c3,
c02,
c03,
K = Inf,
N = Inf,
S = -Inf,
steps1 = 0,
stepm1 = 0.5,
stepl1 = 0.8,
b1,
b2,
b3,
rho,
fixed,
relaxed = FALSE,
num_cl = 1
)
Arguments
- Delta1
assumed true treatment effect for endpoint 1 measured as the difference in means
- Delta2
assumed true treatment effect for endpoint 2 measured as the difference in means
- in1
amount of information for Delta1 in terms of sample size
- in2
amount of information for Delta2 in terms of sample size
- sigma1
variance of endpoint 1
- sigma2
variance of endpoint 2
- n2min
minimal total sample size in phase II, must be an even number
- n2max
maximal total sample size in phase II, must be an even number
- stepn2
step size for the optimization over n2, must be an even number
- kappamin
minimal threshold value kappa for the go/no-go decision rule
- kappamax
maximal threshold value kappa for the go/no-go decision rule
- stepkappa
step size for the optimization over the threshold value kappa (the optimization grids spanned by these min/max/step arguments are sketched after this list)
- alpha
one-sided significance level/family-wise error rate
- beta
type-II error rate for any pair, i.e. 1 - beta is the (any-pair) power for calculation of the sample size for phase III
- c2
variable per-patient cost for phase II in 10^5 $
- c3
variable per-patient cost for phase III in 10^5 $
- c02
fixed cost for phase II in 10^5 $
- c03
fixed cost for phase III in 10^5 $
- K
constraint on the costs of the program, default: Inf, i.e. no constraint
- N
constraint on the total expected sample size of the program, default: Inf, i.e. no constraint
- S
constraint on the expected probability of a successful program, default: -Inf, i.e. no constraint
- steps1
lower boundary for effect size category "small", default: 0
- stepm1
lower boundary for effect size category "medium" = upper boundary for effect size category "small" default: 0.5
- stepl1
lower boundary for effect size category "large" = upper boundary for effect size category "medium", default: 0.8
- b1
expected gain for effect size category "small" in 10^5 $
- b2
expected gain for effect size category "medium" in 10^5 $
- b3
expected gain for effect size category "large" in 10^5 $
- rho
correlation between the two endpoints
- fixed
choose whether the assumed true treatment effects are fixed (TRUE) or modelled by a prior distribution (FALSE)
- relaxed
relaxed (TRUE) or strict (FALSE) combination rule for the overall effect size category (see Details), default: FALSE
- num_cl
number of clusters used for parallel computing, default: 1
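The min/max/step arguments above define the search grids over which the optimization is carried out. As a purely conceptual sketch (using the values from the Examples section below, not defaults of the function), these grids correspond to:
# Conceptual sketch of the optimization grids spanned by the min/max/step
# arguments (values taken from the Examples section below)
n2_grid    <- seq(from = 30,   to = 90,  by = 10)    # n2min, n2max, stepn2
kappa_grid <- seq(from = 0.05, to = 0.2, by = 0.05)  # kappamin, kappamax, stepkappa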
Value
The output of the function is a data.frame
object containing the optimization results:
- Kappa
optimal threshold value for the decision rule to go to phase III
- n2
total sample size for phase II; rounded to the next even natural number
- n3
total sample size for phase III; rounded to the next even natural number
- n
total sample size in the program; n = n2 + n3
- K
maximal costs of the program (i.e. the cost constraint, if it is set, or the sum K2+K3 if no cost constraint is set)
- pgo
probability to go to phase III
- sProg
probability of a successful program
- sProg1
probability of a successful program with "small" treatment effect in phase III
- sProg2
probability of a successful program with "medium" treatment effect in phase III
- sProg3
probability of a successful program with "large" treatment effect in phase III
- K2
expected costs for phase II
- K3
expected costs for phase III
and further input parameters. Applying cat(comment()) to the returned data frame lists the optimization sequences used as well as the start and finish time of the optimization procedure. The attribute attr(,"trace") contains the utility values of all parameter combinations visited during optimization.
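For illustration, a minimal sketch of how the returned object can be inspected, assuming the result of a call such as the one in the Examples section has been stored in an object named res (the object name is hypothetical):
# Hypothetical usage: res <- optimal_multiple_normal(...)
res                       # data frame with the optimization results listed above
cat(comment(res))         # optimization sequences, start and finish time
head(attr(res, "trace"))  # utility values of all visited parameter combinations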
Details
For this setting, the drug development program is defined to be successful if it proceeds from phase II to phase III and all endpoints show a statistically significant treatment effect in phase III. For example, this situation is found in Alzheimer’s disease trials, where a drug should show significant results in improving cognition (cognitive endpoint) as well as in improving activities of daily living (functional endpoint).
The effect size categories small, medium and large are applied to both endpoints. In order to define an overall effect size from the two individual effect sizes, the function implements two different combination rules:
- A strict rule (relaxed = FALSE) assigning a large overall effect in case both endpoints show an effect of large size, a small overall effect in case at least one of the endpoints shows a small effect, and a medium overall effect otherwise, and
- A relaxed rule (relaxed = TRUE) assigning a large overall effect if at least one of the endpoints shows a large effect, a small overall effect if both endpoints show a small effect, and a medium overall effect otherwise.
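The following helper is an illustrative sketch (hypothetical, not code from the drugdevelopR package) of how the two rules map the per-endpoint categories to an overall category:
# Illustrative helper (hypothetical): combine per-endpoint effect size
# categories "small", "medium", "large" into an overall category
combine_effect_sizes <- function(cat1, cat2, relaxed = FALSE) {
  if (relaxed) {
    # relaxed rule: "large" if at least one endpoint is large,
    # "small" only if both endpoints are small, "medium" otherwise
    if (cat1 == "large" || cat2 == "large") return("large")
    if (cat1 == "small" && cat2 == "small") return("small")
  } else {
    # strict rule: "large" only if both endpoints are large,
    # "small" if at least one endpoint is small, "medium" otherwise
    if (cat1 == "large" && cat2 == "large") return("large")
    if (cat1 == "small" || cat2 == "small") return("small")
  }
  "medium"
}
combine_effect_sizes("large", "medium", relaxed = FALSE)  # "medium" under the strict rule
combine_effect_sizes("large", "medium", relaxed = TRUE)   # "large" under the relaxed rule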
Fast computing is enabled by parallel programming.
Monte Carlo simulations are applied for calculating utility, event count and other operating characteristics in this setting. Hence, the results are affected by random uncertainty.
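In practice this means that seeding the random number generator (as in the Examples below) helps make results reproducible, and that computation time can be reduced by using more than one cluster; an illustrative sketch (the chosen number of clusters is only an example, not a default):
# Illustrative settings (not defaults of the function)
set.seed(123)                                   # fix the random number generator, as in the Examples below
n_cores <- max(1, parallel::detectCores() - 1)  # leave one core free
# ... then pass num_cl = n_cores in the call to optimal_multiple_normal()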
References
Meinhard Kieser, Marietta Kirchner, Eva Dölger, Heiko Götte (2018). Optimal planning of phase II/III programs for clinical trials with multiple endpoints.
IQWiG (2016). Allgemeine Methoden. Version 5.0, 10.07.2016, Technical Report. Available at https://www.iqwig.de/ueber-uns/methoden/methodenpapier/, last accessed 15.05.19.
Examples
# Activate progress bar (optional)
if (FALSE) progressr::handlers(global = TRUE) # \dontrun{}
# Optimize
# \donttest{
set.seed(123) # This function relies on Monte Carlo integration
optimal_multiple_normal(Delta1 = 0.75,
Delta2 = 0.80, in1 = 300, in2 = 600, # define assumed true effects (differences in means)
sigma1 = 8, sigma2 = 12, # variances for both endpoints
n2min = 30, n2max = 90, stepn2 = 10, # define optimization set for n2
kappamin = 0.05, kappamax = 0.2, stepkappa = 0.05, # define optimization set for kappa
alpha = 0.025, beta = 0.1, # planning parameters
c2 = 0.75, c3 = 1, c02 = 100, c03 = 150, # fixed/variable costs: phase II/III
K = Inf, N = Inf, S = -Inf, # set constraints
steps1 = 0, # define lower boundary for "small"
stepm1 = 0.5, # "medium"
stepl1 = 0.8, # and "large" effect size categories
b1 = 1000, b2 = 2000, b3 = 3000, # define expected benefit
rho = 0.5, relaxed = TRUE, # correlation and relaxed combination rule
fixed = TRUE, # use fixed treatment effects
num_cl = 1) # number of clusters for parallel computing
#> Optimization result:
#> Utility: -126.46
#> Sample size:
#> phase II: 30, phase III: 29, total: 59
#> Probability to go to phase III: 0.37
#> Total cost:
#> phase II: 122, phase III: 84, cost constraint: Inf
#> Fixed cost:
#> phase II: 100, phase III: 150
#> Variable cost per patient:
#> phase II: 0.75, phase III: 1
#> Effect size categories (expected gains):
#> small: 0 (1000), medium: 0.5 (2000), large: 0.8 (3000)
#> Success probability: 0.08
#> Joint probability of success and observed effect of size ... in phase III:
#> small: 0.08, medium: 0, large: 0
#> Significance level: 0.025
#> Targeted power: 0.9
#> Decision rule threshold: 0.05 [Kappa]
#> Assumed true effects [Delta]:
#> endpoint 1: 0.75, endpoint 2: 0.8
#> Correlation between endpoints: 0.5
# }