DOI: 10.1111/j.1541-0420.2007.00769.x
Biometrics

Sample Size Determination for Hierarchical Longitudinal Designs with Differential Attrition Rates

Anindya Roy,1,2 Dulal K. Bhaumik,1,3,4 Subhash Aryal,1,3,4 and Robert D. Gibbons1,3,4,*

1 Center for Health Statistics, University of Illinois at Chicago, 1601 W. Taylor St., Chicago, Illinois 60612, U.S.A.
2 Department of Mathematics and Statistics, University of Maryland, Baltimore County, Baltimore, Maryland 21250, U.S.A.
3 Department of Psychiatry, University of Illinois at Chicago, 1601 W. Taylor St., Chicago, Illinois 60612, U.S.A.
4 Division of Epidemiology and Biostatistics, University of Illinois at Chicago, 1601 W. Taylor St., Chicago, Illinois 60612, U.S.A.
* email: [email protected]
Summary. We consider the problem of sample size determination for three-level mixed-effects linear regression models for the analysis of clustered longitudinal data. Three-level designs are used in many areas, in particular in multicenter randomized longitudinal clinical trials in medical or health-related research. In this case, level 1 represents measurement occasion, level 2 represents subject, and level 3 represents center.
The model we consider involves random eﬀects of the time trends at both the subject level and the center
level. In the most common case, we have two random eﬀects (constant and a single trend), at both subject
and center levels. The approach presented here is general with respect to sampling proportions, number of
groups, and attrition rates over time. In addition, we also develop a cost model, as an aid in selecting the
most parsimonious of several possible competing models (i.e., diﬀerent combinations of centers, subjects
within centers, and measurement occasions). We derive sample size requirements (i.e., power characteristics) for a test of treatment-by-time interaction(s) for designs based on either subject-level or cluster-level
randomization. The general methodology is illustrated using two characteristic examples.
Key words: Cost analysis; Dropouts; Mixed eﬀects; Power analysis; Proﬁle analysis; Three-level nested
design.
1. Introduction
In the medical and health sciences, multicenter randomized
clinical trials have become the design of choice for large-scale
studies of the eﬀects of medical interventions on health outcomes. In such trials a primary focus of the analysis is testing
the single degree of freedom treatment by linear time interaction. In this design, treatment and treatment-by-time interactions are treated as ﬁxed eﬀects in the statistical model,
whereas the intercept and slope of the time trends are considered to be random at both the center level and the subject
level.
When designing a longitudinal multicenter clinical trial,
there are several fundamental questions that must be answered: (i) What are the minimum number of subjects (n),
centers (c), and time points (T) that are required in order to
attain a prespeciﬁed power level for testing signiﬁcance of the
treatment-by-time interaction? (ii) If the number of centers
falls below the minimum number, can one still achieve the
desired level of statistical power (e.g., 80%) by increasing the
number of subjects enrolled in each center? (iii) How should
the sample size be adjusted if dropouts are expected? (iv) If there is a budget constraint, what is the optimum allocation of c, n, and T that achieves the desired power level?

© 2007, The International Biometric Society
At present there are no deﬁnitive guidelines that can simultaneously answer all of these questions for three-level nested
designs. Questions (i) and (iii) can be implicitly solved using
a computer-intensive method which iteratively computes the
power and solves for the sample size. One can use the form of
the noncentrality parameter of the test for speciﬁc models; see
Liu and Liang (1997), Verbeke and Molenberghs (2000), and
Diggle et al. (2002). Question (ii) has no clear answer in the literature, and the findings of this article contradict a common perception in the practice of multicenter clinical trials. These findings are in the same vein as those
in the group randomization trial literature (Donner, 1992;
Donner and Klar, 2000) where the trade-oﬀ between cluster
size and number of clusters is investigated using results by
Kish and Frankel (1974) and others in the context of multistage sampling. The cost analysis question has been addressed by many investigators (e.g., Bloch [1986], Lui and Cumberland [1992], and Raudenbush and Liu [2000]), for designs and statistical models simpler than the ones considered here.
Sample size determination is an important problem: an insufficient sample size can lead to inadequate sensitivity, delaying important discoveries, whereas an excessive sample size wastes resources and may have ethical implications.
The objective of this article is to explicate the power characteristics of the two general types of three-level studies discussed above (i.e., subject randomization versus cluster randomization) while allowing for dropouts. We answer the above
questions under the umbrella of a general three-way nested
model which considers S treatments comprised of K factors
(S ≥ K), with potentially diﬀerent sampling proportions. In
terms of the time trends, we consider up to Q components,
which may include up to a Qth degree polynomial (Q < T )
or any possible set of contrasts, including those used in proﬁle analyses (Fitzmaurice, Laird, and Ware, 2004). The literature of determining sample size for repeated measures is
extensive for both univariate and multivariate linear models. Muller and Barton (1989) and Muller et al. (1992) discuss statistical power computation for longitudinal studies.
Overall and Doyle (1994), Kirby, Galai, and Munoz (1994),
Ahn, Overall, and Tonidandel (2001), among others, provide formulas for sample size estimation for repeated measures
models involving two groups. Snijders and Bosker (1999) give
approximate power calculations for testing ﬁxed eﬀects in
mixed models involving repeated observations. Researchers
have attempted to model the eﬀect of dropouts on power
computation (Diggle et al., 2002). Verbeke and Molenberghs
(2000) used a dropout process model for deriving the noncentrality parameter of an F-test that is used for testing
ﬁxed eﬀects in linear mixed models. Hedeker, Gibbons, and
Waternaux (1999) derived sample size formulas for two-level nested designs with a constant attrition rate. However, no results for sample size determination are available for three-level designs, specifically when there is attrition. Several
researchers have investigated the problem of determining sample size for maximizing power under budget constraints. Bloch
(1986) and Lui and Cumberland (1992) determine the sample
size and the number of time points for a speciﬁc power level
in a repeated measurement study in order to minimize suitable cost functions. Raudenbush and Liu (2000) determine
the number of centers and number of subjects per center in
a multicenter randomized clinical trial by incorporating costs
at each level of the design.
The article is organized as follows. In Section 2, we introduce the model and discuss the relevant hypotheses and test
statistics. We derive the sample size formula for subject-level
randomization (Theorem 1) and center-level randomization
(Theorem 2) using diﬀerential attrition rates. Some special
cases like no attrition and constant attrition are also discussed
in this section. We illustrate our results using two motivating
examples in Section 3 and provide numerical results. Section 4
provides a cost analysis in order to minimize the total cost of
the study under the constraint of a prespeciﬁed power level.
We summarize our ﬁndings in Section 5. Proofs of the main
results (Theorems 1 and 2) are relegated to a Web Appendix
which can be found in the supplementary materials posted on
the Biometrics website.
2. Models and Results
Suppose there are c centers, and n subjects are nested within
each center. The total nc subjects are divided into S treatment
groups. Suppose the treatment index for the sth treatment
group is a 1 × (K + 1) vector whose ﬁrst element is one, representing a baseline treatment factor and the other K elements
are the levels of K factors (e.g., a treatment could be a cocktail drug comprised of diﬀerent dose levels of K main drugs)
contributing to the treatment. Each subject is assigned to a
particular treatment. The treatments can be assigned in two
diﬀerent ways. In the ﬁrst assignment scheme each subject in
each center is randomized into one of S treatment groups and
suppose the allocation proportions of the treatment groups
are π = (π 1 , . . . , π S ). Thus, nπ s subjects are assigned to the
sth treatment group. We will assume that the allocation proportion vectors remain the same across centers. In the literature, this assignment is known as subject-level randomization
where the subjects are nested within the treatment groups
which in turn are nested in the centers. The other option is
to randomly assign the c centers to the S treatments with
allocation proportion vector π. Thus, cπs centers will receive the sth treatment. In that case, all subjects in a given center will receive the treatment that has been assigned to the center. This is known as center-level randomization, where the subjects are nested in centers which in turn are nested in the treatment groups. It is understood that the quantities nπs and cπs in the above discussion are all integers.
Let us assume that a longitudinal study has been conducted
at T different time points. In this article, we incorporate the possibility that subjects may drop out of the experiment at any time point. The dropout process can be either modeled as a stochastic process (Verbeke and Molenberghs, 2000) or empirically estimated. However, at the design stage it is unlikely that a good stochastic model for the dropouts is available. We therefore let the dropout rates be prespecified by the experimenter.
For the development of our model we assume a monotone
dropout rate. We permit diﬀerent dropout rates at diﬀerent
time points and across diﬀerent treatment groups. Speciﬁcally,
let ξ s,t denote the fraction of subjects in the sth treatment
group with responses at only the first t time points. We will call the vector of such fractions for a particular treatment group, ξs = (ξs,1, . . . , ξs,T)′, the attrition vector of that treatment group. Our ultimate objective is to determine the sample size for the test of interaction between treatment and time. Unless some subjects stay beyond the first time point, testing the treatment-by-time interaction is not feasible. Thus, we will assume that ξs,1 < 1 for all s. To
capture the evolution of the treatment responses over time,
we consider a time trend matrix TR of order T × (Q + 1). We will take the first column of TR to be a column of ones, to capture a baseline effect. The remaining Q columns each represent a particular functional effect of time. For example, if we are considering only a linear trend, then Q is one and the second column of TR will be (1, . . . , T)′. We will assume the columns of TR to be linearly independent. The basic
time variable t is assumed to be equally spaced but because
we are considering very general functions of t, the spacings
can be adjusted by changing the trend functions. Because
subject-level randomization and center-level randomization
give rise to two distinct designs, we will consider these two
cases separately. The development of the methodology relies heavily on matrix notation. The notation is explained in
Table 1.
Table 1
Glossary of notation

Notation            Meaning
c, n, T             number of centers, subjects, measurement occasions
S, K, Q             number of treatments, treatment factors, time trends
Σγ, Σδ, Σϵ          center-, subject-, and error-level variance components
τ∗                  alternative value for the single degree of freedom test
κ                   (zα + zβ)²/τ∗², where zα, zβ are upper N(0, 1) percentiles
TR; TRt             time trend matrix; first t rows of TR
f22                 (2, 2)th element of (TR′ Σϵ⁻¹ TR)⁻¹
T•                  (TR1′, TR2′, . . . , TRT′)′, the matrices TRt stacked vertically
us                  row vectors of treatment indices
1r; Ir              r × 1 vector of ones; r × r identity matrix
⊕_{i=1}^r Bi        block-diagonal matrix with blocks B1, B2, . . . , Br
{c xi}_{i=1}^r      stacking of the column vectors x1, x2, . . . , xr vertically
A ⊗ B               Kronecker product of the matrices A and B
A •⊗ B              blockwise Kronecker product: for conformably partitioned A = (Aij) and B = (Bij), the matrix with (i, j)th block Aij ⊗ Bij
U; Vs; Us           U = (u1′ : u2′ : · · · : uS′)′; Vs = {c 1_{nξs,t}}_{t=1}^T; Us = us ⊗ Vs
ζs; ζ; Φ; Xc        ζs = {c 1_{nπsξs,t}}_{t=1}^T; ζ = {c ζs}_{s=1}^S; Φ = ζ •⊗ (1S ⊗ T•); Xc = U •⊗ Φ
Λ                   ⊕_{s=1}^S ⊕_{a=1}^T (I_{nπsξs,a} ⊗ TRa)
Σϵ,tt               upper left t × t block of Σϵ
Gt; Rs              Gt = Σϵ,tt + TRt Σδ TRt′; Rs = Σ_{t=1}^T ξs,t TRt′ Gt⁻¹ TRt
Ω̃                   (U ⊗ I_{Q+1})′ (⊕_{s=1}^S πs Rs)(U ⊗ I_{Q+1})
Ω̃n                  (U ⊗ I_{Q+1})′ (⊕_{s=1}^S πs [n⁻¹ Rs⁻¹ + Σγ]⁻¹)(U ⊗ I_{Q+1})
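As a small concrete check of the attrition bookkeeping in this notation, note that since ξs,t is the fraction of subjects observed for exactly the first t occasions, the fraction still under observation at occasion t is the tail sum Σ_{a≥t} ξs,a. A minimal Python sketch, using the attrition vector that appears later in Example 1, Case II:

```python
# Attrition bookkeeping for one treatment group: xi[t-1] is the fraction of
# subjects with responses at exactly the first t time points (monotone dropout).
xi = [0.1, 0.1, 0.1, 0.1, 0.6]   # attrition vector from Example 1, Case II

assert abs(sum(xi) - 1.0) < 1e-12   # the fractions partition the subjects

def retained(xi, t):
    """Fraction of subjects still under observation at time point t."""
    return sum(xi[t - 1:])

retention = [retained(xi, t) for t in range(1, len(xi) + 1)]
# Everyone is observed at t = 1, and 60% complete all five occasions.
```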
2.1 Subject-Level Randomization
Let y denote the vector of observations and let θ, γ, and δ
denote the vector of ﬁxed eﬀects, the vector of center-level
random eﬀects and the vector of subject-level random eﬀects,
respectively. The coeﬃcient matrices are X = 1c ⊗ Xc for the
ﬁxed eﬀects, Z γ = Ic ⊗ Φ for center-level random eﬀects, and
Z δ = Ic ⊗ Λ for the subject-level random eﬀects, where Xc ,
Φ, Λ, Ic, and TRa are defined in Table 1. Then a mixed model for the responses can be written as

y = Xθ + Zγγ + Zδδ + ϵ,     (1)

where ϵ is the vector of model errors. The problem of interest
is to test a linear hypothesis about the treatment-by-time
interaction which reduces to a general linear hypothesis about
the fixed-effect parameters:

H0 : Lθ = 0   vs.   H1 : Lθ ≠ 0,     (2)
where the rows of L describe d linearly independent linear hypotheses about the fixed-effect parameters θ. Note that frequently used hypotheses, such as the treatment-by-time interaction and profile analysis based on the end time point, are included in (2). When d = 1 the concept of one-sided tests is
applicable. To determine a critical region for the hypothesis
(2) we need to make distributional assumptions about the
random components of the model. Let

(γ0,i, . . . , γQ,i)′ ∼ NQ+1(0, Σγ),   (δisj0, . . . , δisjQ)′ ∼ NQ+1(0, Σδ).     (3)
For any a = 1, 2, . . . , T, let the errors (ϵisaj1, . . . , ϵisaja)′ ∼ Na(0, Σϵ,aa), where Σϵ,aa is defined in Table 1. We will assume that the random effects are independent of each other and that the variance components Σγ, Σδ, and Σϵ are specified. Under these
assumptions, the distribution of the responses is y ∼ N (Xθ,
Σ), where Σ can be explicitly deﬁned.
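For concreteness, the Kronecker structure X = 1c ⊗ Xc, Zγ = Ic ⊗ Φ, and Zδ = Ic ⊗ Λ can be sketched numerically. The configuration below (two centers, one treatment group, no attrition, a linear trend) and the resulting simplification Xc = Φ are toy assumptions for illustration, not the paper's general construction:

```python
import numpy as np

# Toy dimensions (assumed for illustration): c centers, n subjects per
# center, T time points, a single treatment group, linear trend (Q = 1).
c, n, T, Q = 2, 3, 2, 1

TR = np.column_stack([np.ones(T), np.arange(1, T + 1)])  # T x (Q+1) time matrix
Phi = np.kron(np.ones((n, 1)), TR)   # center-level random-effect design, one center
Lam = np.kron(np.eye(n), TR)         # subject-level random-effect design, one center
Xc = Phi                             # fixed-effect design for one center (one group, no attrition)

X = np.kron(np.ones((c, 1)), Xc)     # X = 1_c (Kronecker) X_c
Zgamma = np.kron(np.eye(c), Phi)     # Z_gamma = I_c (Kronecker) Phi
Zdelta = np.kron(np.eye(c), Lam)     # Z_delta = I_c (Kronecker) Lambda
```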
Let us now define the appropriate critical region for testing (2), based on the assumption that Σ is completely known. The appropriate test statistic is the pivot F = (Lθ̂)′[LΩL′]⁻¹(Lθ̂), where θ̂ = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y is the generalized least squares (GLS) estimator of θ and Ω = (X′Σ⁻¹X)⁻¹ is the covariance matrix of θ̂. Let χ²_{d,α} be the upper α percentile of the χ² distribution with d degrees of freedom. The most powerful critical region at level α for testing (2) is

C = {y : F > χ²_{d,α}}.     (4)
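A minimal numerical sketch of the pivot F and the critical region (4) under known Σ; the design, covariance, and parameter values below are hypothetical, and for d = 1 the critical value χ²_{1,0.05} ≈ 3.841:

```python
import numpy as np

# Toy setup (hypothetical numbers): 20 observations, intercept and slope.
X = np.column_stack([np.ones(20), np.arange(20.0)])
# Exchangeable blocks of 5 give a simple positive definite Sigma.
Sigma = np.eye(20) + 0.3 * np.kron(np.eye(4), np.ones((5, 5)))

L = np.array([[0.0, 1.0]])            # hypothesis L theta = 0 about the slope
Si = np.linalg.inv(Sigma)
Omega = np.linalg.inv(X.T @ Si @ X)   # covariance of the GLS estimator

def wald_F(y):
    """Pivot F = (L theta_hat)' [L Omega L']^{-1} (L theta_hat)."""
    theta_hat = Omega @ X.T @ Si @ y  # generalized least squares estimate
    r = L @ theta_hat
    return float(r @ np.linalg.inv(L @ Omega @ L.T) @ r)

# At the null mean (zero slope) the pivot is exactly zero; at a nonzero slope
# it is large.  The level-0.05 critical region for d = 1 is {F > 3.841}.
F_null = wald_F(X @ np.array([2.0, 0.0]))
F_alt = wald_F(X @ np.array([2.0, 0.5]))
```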
In practice, an appropriate test statistic for the hypothesis
(2) will be a Wald-type pivotal statistic based on a feasible
version of the GLS estimator (FGLS) of θ where the variance components are replaced by their estimators. Kenward
and Roger (1997) suggested a scaled F distribution as a ﬁnite
sample approximation to the null distribution of an FGLS
test. The scale and the degrees of freedom of the approximate
F involve estimated parameters and also involve n, c, and
T in a highly nonlinear way. The distribution of the FGLS
test under the alternative hypothesis is not known. Helms
(1992) and Verbeke and Molenberghs (2000) provide approximate degrees of freedom and expressions for the noncentrality parameter of the approximate F test (including dropout rates that change over time but not over treatment groups)
under the alternative hypothesis. This methodology cannot
properly determine the minimum number of centers needed
in the study and hence is not likely to yield the optimum cost
solution of (c, n, T). This is the basis for choosing the test (4) in this study. Even if we consider the feasible version of the test, our results continue to hold approximately if we include only the noncentrality parameter in the power computation and not the estimated degrees of freedom. We acknowledge that in practice only feasible versions of the GLS test can be used. In the supplementary materials for this article, posted on the Biometrics website, we report a limited simulation study investigating the performance of the power function, for the setup of the first example of Section 3, when the sample size has been determined using the test (4). When
the number of independent units (e.g., centers in center-level
randomization) is small relative to the total sample size, the
χ2 test may have problems maintaining the nominal level due
to the dependence among the observations. The problem of
interest is to test significance of the treatment-by-time interactions. Thus, each row of the hypothesis matrix L is of the form lU ⊗ lT, where lU is generally a treatment contrast and lT is a time contrast. In most situations we are not interested in the baseline time effects, so the fixed parameters θ1, . . . , θQ+1 do not enter into the hypothesis. This amounts to a treatment contrast lU whose first element is zero, which in turn amounts to the constraint Le1 = 0, where e1 is the vector with a one in the first position and zeros elsewhere. This additional constraint leads to a certain simplification of our
results without compromising much generality in practice. Let
G(d, α, β) be the noncentrality parameter of a noncentral χ2
distribution with d degrees of freedom, whose lower β percentile is the upper α percentile of a central χ2 distribution
with d degrees of freedom.
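For the single degree of freedom case d = 1, used repeatedly below, G(1, α, β) can be computed from normal distribution functions alone, since a noncentral χ²₁(λ) variable is the square of an N(√λ, 1) variable. A pure-Python sketch (the helper names are ours; bisection stands in for a statistical library):

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z_upper(p):
    """Upper p percentile of N(0, 1), by bisection."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if 1.0 - Phi(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def G1(alpha, beta):
    """G(1, alpha, beta): the noncentrality at which a noncentral chi-square
    with 1 df exceeds the central upper-alpha point with probability 1 - beta.
    chi2_1(lam) is the square of N(sqrt(lam), 1), so power has a normal form."""
    z = z_upper(alpha / 2.0)                    # sqrt of chi2_{1, alpha}
    def power(lam):
        mu = sqrt(lam)
        return (1.0 - Phi(z - mu)) + Phi(-z - mu)
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if power(mid) < 1.0 - beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Identity used with Corollary 1: (z_alpha + z_beta)^2 = G(1, 2*alpha, beta);
# for alpha = 0.05, beta = 0.2 this is about 6.1826.
kappa_num = (z_upper(0.05) + z_upper(0.20)) ** 2
```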
Theorem 1. Suppose there are n subjects divided into S
treatment groups, in each of the c centers. Suppose the treatment allocation proportions in each center are given by the allocation vector π. Let the attrition vector in the sth treatment
group in each center be ξ s . Then to attain power of at least
(1 − β) for the test (4) at an alternative value of θ at θ∗ , a
lower bound for the required number of subjects per center is
given by

n ≥ G(d, α, β)/{c θ∗′L′[LΩ̃⁻¹L′]⁻¹Lθ∗},     (5)

where Ω̃ is defined in Table 1.
The sample size formula (5) does not involve the parameters of the covariance matrix Σγ of the center-level random components. This is true when our inference concerns the treatment-by-time interaction parameters and not the baseline treatment parameters alone. The formula (5) can be written as c ≥ G(d, α, β)/(n θ∗′L′[LΩ̃⁻¹L′]⁻¹Lθ∗). Thus, the formula can also be used to determine the number of centers to be included in the study when the number of subjects in each center, n, is specified. Hence, the roles of c and n are interchangeable when randomization is done at the subject level. More generally, the particular allocation of n and c is not important, as long as their product nc attains the lower bound G(d, α, β)/(θ∗′L′[LΩ̃⁻¹L′]⁻¹Lθ∗). Under certain simpler error covariance structures (e.g., Σϵ = σ²IT), the elements of Ω̃ can be written as functions of T, and in those cases the formula (5) can also be used to determine the number of time points T required to attain a prespecified power level when n and c are fixed.
2.1.1 Sample size determination for single degree of freedom test with no attrition. A scenario that is often encountered in practice is a treatment compared with a placebo or a control, with a single time trend. In this case the treatment matrix U has rows (1, 0) and (1, 1), and a typical element of the second column of the time matrix TR is g(t), the trend at time t. Let yisajt denote the response at the tth time point of the jth subject from the ath attrition group nested in the sth treatment group of the ith center. Then a model for the response yisajt is

yisajt = η0 + η1g(t) + τ0xijk + τ1xijkg(t) + γi0 + γi1g(t) + δisj0 + δisj1g(t) + ϵisajt,     (6)
where γ’s are the center-level random components and δ’s are
those at the subject-level. The treatment indicator xijk = 1
if the subject was assigned to test condition and equal to 0
otherwise. The problem of interest is to test whether there is
any treatment-by-time interaction, i.e.,

H0 : τ1 = 0   vs.   H1 : τ1 > 0.     (7)
Let π1 and π2 be the sampling proportions and let there be no attrition. Let the alternative value be τ∗ = Lθ∗ and let κ = (zα + zβ)²/τ∗², where zα and zβ are the corresponding standard normal percentiles. Let f22 and σδ,22 denote the (2, 2)th elements of the matrices (TR′Σϵ⁻¹TR)⁻¹ and Σδ, respectively.

Corollary 1. The sample size formula for the single degree of freedom test is

n ≥ (f22 + σδ,22)κ/(π1π2c).     (8)
In a single degree of freedom test with a one-sided alternative, the value (zα + zβ)² matches G(1, 2α, β) of the general formula. The U matrix for the above scenario is nonsingular, and the sample size formula in (5) can be simplified further whenever U is nonsingular, even in the presence of differential attrition rates. The sample size formula in (8) depends on the subject-level covariance matrix only through the variance of the subject-level random slope.
2.2 Center-Level Randomization
Next we consider the case in which the centers are randomly
assigned to the treatments and all subjects in a given center
receive the treatment assigned to that center. The centers are
randomly assigned to S treatments according to allocation
proportions π, i.e., all the subjects in π s c centers will receive
treatment s. Suppose the subjects in a center receiving the
sth treatment drop out according to the attrition vector ξ s .
Because the observations across the centers are independent,
it is enough to define the model center by center. The fixed-effect matrix for a center receiving treatment s is given by Xs = Us •⊗ T•, where Us and T• are defined in Table 1. Then
the sample size formula associated with the hypothesis (2) for
a speciﬁc alternative is given by the following theorem.
Theorem 2. Suppose there are c centers divided into S
treatment groups according to the allocation vector π. Let each
center have n subjects and let the dropout rates in the centers
receiving the sth treatment be ξ s . Then to attain a power of at
least (1 − β) for the test (4) at an alternative value θ∗ , a lower
bound for the required number of subjects per center is given by
n ≥ min{i : f(i) ≥ G(d, α, β)/c},     (9)

where f(n) is a strictly increasing function of n defined by f(n) = θ∗′L′[LΩ̃n⁻¹L′]⁻¹Lθ∗ and Ω̃n is defined in Table 1. The formula (9) has a feasible solution for all values of c such that

c ≥ G(d, α, β)/{θ∗′L′[LΩ̃∞⁻¹L′]⁻¹Lθ∗},     (10)

where Ω̃∞ = (U′ΔπU) ⊗ Σγ⁻¹, and Δπ is a diagonal matrix with diagonal elements πs.
The condition (10) puts a restriction on the minimum number of centers. Formula (10) shows that if the number of centers falls below a critical level, the shortfall can never be compensated by increasing the number of subjects per center. Such trade-offs between cluster size and number of clusters can also be found in the group randomization trial literature and are analyzed via the intracluster correlation; see Donner and Klar (2000) and the references therein. The results have roots in the pioneering work of Kish and Frankel (1974) in the context of multistage sampling. To determine the values of n and c using the approximate F distribution under the alternative, an iterative method is needed. The current methodology provides a lower bound for the values of c in the iterations. Note that at the lower bound for c, n can be inordinately large, and so somewhat larger values of c should be considered in practice.
2.2.1 Sample size determination for single degree of freedom test with no attrition. Consider the scenario of Subsection
2.1.1 for center-level randomization.
Corollary 2. Let τ∗ = Lθ∗ be the alternative value. Then the formula (9) reduces to

n ≥ (f22 + σδ,22)κ/(π1π2c − κσγ,22),     (11)

where σγ,22 is the (2, 2)th element of Σγ. A feasible solution to (11) exists provided the number of centers satisfies c ≥ π1⁻¹π2⁻¹κσγ,22.
The expression in (11) depends on the elements of Σγ only through σγ,22, the variance of the center-level random slope. Once a center-level random slope is present, a consistent test of the treatment-by-time interaction requires accurate estimation of the variance of the center-level slope. This entails treating the centers as a sample from a larger population of centers, and thus to draw valid inference on a center-level component one needs a sufficiently large sample of centers. Accurate estimation of the center-level slope variance is not possible if the number of centers is inadequate. This disproves the common perception that with few centers one can still achieve adequate power by increasing the number of subjects per center. For a model with only a random intercept at the center level, the denominator of the right-hand side of (11) depends on neither the elements of Σγ nor n. In that case, given a prespecified number of centers and an alternative value τ∗, one can always find the number of subjects per center needed to attain any prespecified power level.

To determine the number of centers for a prespecified value of n, the formula (11) can be written as c ≥ π1⁻¹π2⁻¹[κσγ,22 + (f22 + σδ,22)κ/n].
3. Examples
To help ﬁx ideas about the three-level nested model using real
world studies, consider the following two examples.
Example 1. Testing for treatment trends in mental health
schizophrenia data.
The following data were collected as part of the National Institute of Mental Health schizophrenia collaborative study on
treatment-related changes in overall severity using the Inpatient Multidimensional Psychiatric Scale (IMPS) (Lorr and
Klett, 1966). Nine centers participated in this study, and
within each center, subjects were randomly assigned to a
placebo condition or one of three drug conditions (chlorpromazine, ﬂuphenazine, or thioridazine). Hedeker and Gibbons
(2006) analyzed data only from subjects assigned to either
the placebo or the test condition. In this study there were
seven time points, hence T = 7; however, for the purpose
of illustration we use T = 5 time points. An appropriate
model for the response is (6) and the hypothesis of interest
is (7), signiﬁcance of interaction of test condition and time.
To linearize the time versus response function, Hedeker and Gibbons (2006) considered g(t) = √(t − 1). The parameter estimates they obtained will be used as the parameter values; they assumed the center-level slope variance σγ,11 to be zero and took the error covariance matrix to be Σϵ = σ²IT. The estimates are σδ,00 = 0.285, σδ,11 = 0.225, σδ,01 = 0.025, σγ,00 = 0.039, and σ² = 0.570. We will address the question of sample size determination when σγ,11 = 0.1368 and σγ,01 = 0; this value of σγ,11 is a crude method of moments estimate of the center-level slope variance obtained from the data. In the actual trial, subjects
were randomized to treatments within centers; however, we
can use these data, and estimates of the variance components
to derive sample sizes for both subject-level and center-level
randomizations. Treating the above estimates as the parameter values for model (6) we determine the sample size required
to attain a power of at least 80% for the test (4) with α =
0.05.
Case I. Subject-level randomization with no attrition.

The appropriate formula for the sample size calculation in this case is (8). Because we are assuming only a single trend (i.e., Q = 1) and Σϵ = 0.57I, we have f22 = σ²/σ²_{gT}, where σ²_{gT} = Σ_{t=1}^T (g(t) − ḡT)² is the centered sum of squares of the trend column and ḡT = T⁻¹ Σ_{t=1}^T g(t) is its mean. In this example, π1 = π2 = 0.5 and the number of centers is c = 9. The value of κ for α = 0.05 and β = 0.2 is κ = 6.1826/τ∗². The variance of an observation at time point t is σ²_{y,t} = 0.5322 + 0.05√(t − 1) + 0.3618t. We can rewrite κ in terms of an effect size at time t: the effect size at time point t is ESt = τ∗√(t − 1)/σ_{y,t}, in which case κ = 6.1826(t − 1)/(σ²_{y,t}ES²t). Note that at t = 5, τ∗ = 0.2343, κ5 = 112.94, σ²_{g5} = 2.4452,
Figure 1. Power curves as functions of τ∗ for various values of n in Example 1 with subject-level randomization.
σ²_{y,5} = 2.4412. In this formulation, the value of the effect size is ES5 = 0.30. Hence the required number of subjects per center is n ≥ (f22 + σδ,11)κ5/(π1π2c) = (0.233 + 0.225)(112.94)/(0.25 × 9) ≈ 23, or 23 subjects in each of 9 centers.
Figure 1 shows that the power curves are monotonically increasing functions of the alternative value τ∗ and that they change substantially as the number of subjects increases from 2 to 42 in increments of 8.
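The Case I computation can be reproduced by coding Corollary 1 directly. A pure-Python sketch using the estimates quoted above (the normal percentiles 1.6449 and 0.8416 are hard-coded standard values):

```python
from math import sqrt, ceil

# Example 1, Case I: subject-level randomization, no attrition.
T, c, pi1, pi2 = 5, 9, 0.5, 0.5
sigma2, sigma_d11 = 0.570, 0.225     # error variance, subject-level slope variance
tau_star = 0.2343                    # alternative value of the interaction

g = [sqrt(t - 1) for t in range(1, T + 1)]     # g(t) = sqrt(t - 1)
gbar = sum(g) / T
sg2 = sum((x - gbar) ** 2 for x in g)          # centered sum of squares, about 2.445
f22 = sigma2 / sg2                             # (2,2) element of (TR' Sigma_eps^{-1} TR)^{-1}

z_alpha, z_beta = 1.6449, 0.8416               # upper 5% and 20% normal points
kappa = (z_alpha + z_beta) ** 2 / tau_star ** 2

# Corollary 1, formula (8): subjects per center, rounded up.
n = ceil((f22 + sigma_d11) * kappa / (pi1 * pi2 * c))
```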
Case II. Subject-level randomization with constant attrition.
Suppose ξA = ξB = (.1, .1, .1, .1, .6)′ and c = 9. The formula (8) becomes n ≥ r22κ/(πAπBc), where r22 is the (2, 2) element of R1⁻¹ and R1 is defined in Table 1. Assuming as in Case I that t = 5 and τ∗ = 0.2343, the required number of subjects per center for this case is 29.
Case III. Center-level randomization with no attrition.
The minimum number of centers obtained from Corollary 2 is c ≥ πA⁻¹πB⁻¹κσγ,11 = 4(6.1826)(0.1368)/τ∗² = 3.3831/τ∗², and for a value of c exceeding 3.3831/τ∗² the required number of subjects is n ≥ 4(f22 + 0.225)(6.1826)/(cτ∗² − 3.3831) = 11.33/(cτ∗² − 3.3831). For t = 5 time points, there should be a minimum of 62 centers. For 62 centers, the required number of subjects per center is 552, a huge number. As the number of centers increases beyond the required minimum, the total number of subjects required decreases dramatically. For example, with 70 centers, only 26 subjects per center are required; with 100 centers, only 6 subjects per center are required. Compared to subject-level randomization (Case I), in which we needed a total of 9 × 23 = 207 subjects, center-level randomization requires a much larger total number of subjects, for example, a high of 62 × 552 = 34,224 for 62 centers and 600 for 100 centers.
Based on these results, it would seem logical to always use
subject-level randomization; however, there are many cases
in which this is not possible. For example, most school-based
interventions require that randomization is at the level of the
school, because only one condition (i.e., experimental or control) can be implemented at a given school.
Case IV. Center-level randomization with constant attrition.
Suppose ξA = ξB = (.1, .1, .1, .1, .6)′. Assuming 100 centers, 7 subjects per center are required (as compared to 6 subjects per center with no attrition).
Example 2. Proﬁle analysis for treatment of lead-exposed
children (TLC) trial.
The TLC trial was a placebo-controlled, double-blind randomized trial conducted by the TLC Trial Group. The purpose of this clinical trial was to compare the effect of lead chelation with succimer to placebo therapy. Data were collected at sites in Cincinnati, Ohio; Philadelphia, Pennsylvania; Baltimore, Maryland; and Newark, New Jersey. A complete data description is available at http://dir.niehs.nih.gov/direb/studies/tlc/home.htm.
In the TLC trial, one is interested in comparing the two
treatment groups (succimer or placebo) in terms of their patterns of change from baseline in the mean blood lead levels.
This question is equivalent to a proﬁle analysis (Fitzmaurice
et al., 2004). Model (6) can also be used for the TLC trial
data with the four centers, Baltimore, Cincinnati, Newark,
and Philadelphia. However, the null hypothesis for the proﬁle analysis would be diﬀerent from that of the single degree of freedom hypothesis of treatment-by-time interaction.
In profile analysis, we first construct the contrast between the treatment means at each time point and then test the equality of these contrasts. Mathematically, this can be stated as follows. The treatment matrix is the 2 × 2 matrix U with rows (1, 1) and (1, −1), and the time matrix is TR = (1T : CT), where CT is a T × (T − 1) matrix whose columns are time contrasts orthogonal to the first column of TR. In particular, the form of CT appropriate for comparing every time point with the baseline period is CT = (1T−1 : −IT−1)′. The objective of the profile analysis is to test the interaction between the treatment and the time contrast LT. Suppose LU is the vector (1, 0)′ and LT is a (T − 1) × T time contrast matrix whose linearly independent rows are orthogonal to 1T. The degrees of freedom for this test are Q = T − 1, and we let Σ = σ²IT. The required sample
size at the alternative value θ = θ∗ = (θ1∗′, θ2∗′)′ is

Subject level: n ≥ G(T − 1, α, β)/[θ2∗′LT′(LT R−1LT′)−1LT θ2∗].

Center level: n = min{i : θ2∗′LT′[LT(R−1/i + Σγ)LT′]−1LT θ2∗ ≥ G(T − 1, α, β)/c},

where the number of centers satisfies c ≥ G(T − 1, α, β)/[θ2∗′LT′(LT Σγ LT′)−1LT θ2∗].
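The subject-level formula can be evaluated numerically. In the sketch below, LT compares each post-baseline occasion with baseline; the working correlation R, the alternative profile θ2∗, and the value of G(T − 1, α, β) are hypothetical stand-ins, not quantities from the TLC trial.

```python
import math
import numpy as np

# Numerical sketch of the subject-level sample size formula for the
# profile analysis. All inputs below are hypothetical stand-ins.
T = 5
L_T = np.hstack([np.ones((T - 1, 1)), -np.eye(T - 1)])  # (T-1) x T baseline contrasts
R = np.eye(T)                                           # assumed working correlation
theta2 = np.array([0.0, 0.3, 0.5, 0.6, 0.7])            # assumed time-effect profile
G = 13.0                                                # assumed G(T-1, alpha, beta)

x = L_T @ theta2                                        # contrasted effects
M = L_T @ np.linalg.inv(R) @ L_T.T                      # (T-1) x (T-1)
delta = x @ np.linalg.solve(M, x)                       # effect-size term in the denominator
n = math.ceil(G / delta)                                # smallest n satisfying the bound
```

The required n is the smallest integer for which n·δ exceeds G(T − 1, α, β); with these stand-in inputs the bound yields n = 43.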
4. Cost Analysis
An experimenter may increase the power of a three-level study
in several ways. The design variables are T, n, and c. In our results we have treated (π1, . . . , πS) as known constants; in practice, optimal determination of these proportions may itself be a primary goal of the study. Thus, the variables for power/cost analysis are c, n, π1, . . . , πS−1, and T. However, there is a cost associated with each of these variables, and similar trade-off analyses can be found in the multistage sampling literature. The fundamental goal of the cost analysis is to minimize the total cost of the study under the constraint on the variables obtained
from the sample size formula. In this section, we ﬁrst formulate the cost analysis problem in terms of a general cost
function under the assumption that the sampling proportions
are given. Special attention is paid to a particular cost function that is similar to those considered in Bloch (1986) and
Lui and Cumberland (1992). The cost analysis problem for
this special cost function is solved in light of Example 1.
Let the cost function be denoted by Q(c, n, T ). Then
the cost analysis problem can be written as an optimization
problem min{Q(c, n, T) : δ(θ∗, c, n, T) ≥ G(d, α, β)}, where (c, n, T) ∈ Z3+ lies on the positive integer lattice and δ(·, ·, ·, ·) is
the noncentrality parameter of the distribution of the test
statistic under the alternative θ = θ∗. After some linear algebra, and provided the error covariance matrix Σ has a simple form (e.g., independence, intraclass correlation, autoregressive, or more generally Toeplitz), it can be shown that for center-level randomization (subject-level randomization yields an even simpler form) the optimization problem reduces to
min{Q(c, n, T) : c y′(n−1I + Δ)−1y ≥ φ(T)},   (12)
where y ∈ Rd , Δ is a diagonal matrix and φ(T ) is a function of T. All quantities depend on the parameters of the
problem, the alternative value and the function G. The form
(12) can be solved using integer programming routines. Given
that most cost functions Q will be increasing in all arguments, the solution will lie on the boundary of the constraint space (integer boundary). As we are solving only in
a three-dimensional space, problem (12) can be satisfactorily solved through repeated function evaluation. In simpler problems, an approximate optimal solution can be found by solving the corresponding continuous problem.

Figure 2. Cost as a function of T when n and c are given at their optimal values.
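The repeated-function-evaluation strategy for (12) can be sketched as a brute-force scan of the integer lattice. The vector y, the diagonal matrix Δ, and the forms of φ(T) and Q below are hypothetical stand-ins; the only structural facts used are that Q increases in each argument and that feasibility is monotone in n.

```python
import numpy as np

# Brute-force search for problem (12):
#   minimize Q(c, n, T) subject to c * y'(n^{-1} I + Delta)^{-1} y >= phi(T).
# All numeric values below are hypothetical stand-ins, not the paper's.
y = np.array([0.4, 0.6])
Delta = np.diag([0.05, 0.10])
phi = lambda T: 2.0 + 0.5 * T  # assumed form of phi(T)
Q = lambda c, n, T: 1000 + 200 * T + 1000 * c + 100 * T * c + 10 * n * c + T * n * c

def feasible(c, n, T):
    A = np.eye(len(y)) / n + Delta
    return c * (y @ np.linalg.solve(A, y)) >= phi(T)

best = None
for T in range(2, 11):
    for c in range(1, 51):
        for n in range(1, 201):
            if feasible(c, n, T):
                cost = Q(c, n, T)
                if best is None or cost < best[0]:
                    best = (cost, c, n, T)
                # Q increases in n, so the smallest feasible n is best for this (c, T).
                break
```

Because the feasibility region shrinks as n decreases and Q increases in all arguments, the optimum lies on the integer boundary of the constraint, exactly as noted above.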
For example, let us consider the single-degree-of-freedom test situation discussed in Example 1, and let the cost structure be of the form
Q1 (c, n, T ) = a1 + a2 T + a3 c + a4 T c + a5 nc + a6 T nc,
where a1 is the initial cost, a2 is the cost involved with each
time point, a3 is the initial cost for enrolling each center for
the study, a4 is the incremental center cost at each time point,
a5 is the per-subject enrollment cost, and a6 is the incremental cost for each subject at each time point. When there is
attrition, the total number of measurements is smaller than
Tnc and the coeﬃcient of a6 is potentially smaller than the
factor Tnc considered in the cost function. For more details
about cost functions we refer to Bloch (1986) and Lui and
Cumberland (1992). We will write f22 in Theorem 1 as f22,g(T) to indicate its dependence on the trend function g(t) and T. In Example 1, g(t) = √(t − 1). Then the optimization problem can be written as min{Q1(c, n, T) : n(c − cmin) ≥ φ(T)}, where φ(T) = 4(f22,g(T) + σδ,22)κ, cmin = 0 for subject-level randomization, and cmin = 4κσγ,22 for center-level randomization. Let us
consider the case of subject-level randomization. If we optimize over {c ≥ 1, n ≥ 1, T ≥ 2}, then the solution is nopt = φ(Topt)/copt, where copt and Topt are obtained by minimizing [a1 + a2T + a3c + a4cT + a5φ(T) + a6Tφ(T)] for T > 1 and 1 ≤ c ≤ φ(T). For a given value of T, this cost function is linear in c with positive slope (a3 + a4T > 0), and hence the minimum occurs at c = 1. Thus,
the optimum number of time points, Topt, is a solution to

minT>1 [(a1 + a3) + (a2 + a4)T + (a5 + a6T)φ(T)].   (13)
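Problem (13) is a one-dimensional integer search, so it can be solved by scanning T. In the sketch below, the form of φ(T) is a hypothetical stand-in for 4(f22,g(T) + σδ,22)κ, chosen only to be decreasing in T; the cost coefficients are illustrative as well.

```python
# One-dimensional scan for T_opt in (13).
# phi(T) is a hypothetical stand-in, decreasing in T (more time points
# reduce the required number of subjects); coefficients are illustrative.
a1, a2, a3, a4, a5, a6 = 1000, 200, 1000, 100, 10, 1
phi = lambda T: 800.0 / (T - 1)

# Objective of (13): fixed costs plus per-time-point and per-measurement costs.
cost = lambda T: (a1 + a3) + (a2 + a4) * T + (a5 + a6 * T) * phi(T)

T_opt = min(range(2, 21), key=cost)
```

The scan balances the per-time-point cost, which grows with T, against φ(T), which shrinks with T; with these stand-in values the minimum falls at T = 6.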
We chose hypothetical values of the coefficient vector (a1, a2, a3, a4, a5, a6) = (1000, 200, 1000, 100, 10, 1) and optimized the cost in (13). The optimal solution lies at (c = 1, n = 207, T = 5). Thus, the total number of subjects and the number of time points match those in Example 1, Case I; however, because of the cost structure, the optimum-cost solution places all subjects in a single center instead of nine centers with 23 subjects each. Figure 2 verifies that the cost is minimized at Topt = 5 when n and c are held at their optimal values. The minimum cost at T = 5 is about $10,600 under the current relative cost structure.

5. Conclusions
In this article, we have developed a general model for sample size determination for multicenter longitudinal studies. This methodology includes multicenter randomized longitudinal clinical trials with randomization at either the subject level or the center level. We consider two-group and multiple-group comparisons. With respect to time, we considered any possible set of contrasts, but focused in particular on polynomial contrasts, with a linear trend model as the simplest case, and simple contrasts to baseline as a form of profile analysis. Our model is also general with respect to treatment group allocation proportions, as well as dropout rates, which are allowed to vary both between groups and over time.

A very important finding of this study is that for multicenter longitudinal studies with subject-level randomization, the center-level variance components play no role whatsoever in computing power. While initially somewhat counterintuitive, further reflection reveals that because interest is in the treatment-by-time interaction, and both treatment and time are nested within centers, variability in intercepts and trend coefficients across centers plays no role in the power characteristics of the statistical test of the treatment-by-time interaction. This is not the case, however, for center-level randomization. Here, center-level trend coefficients play a significant role in sample size determination, such that different combinations of the number of centers and the number of subjects within centers can lead to quite different power characteristics, even if the total number of subjects is identical. Furthermore, we note that for center-level randomization, there is a minimum number of centers below which certain levels of statistical power (e.g., 0.8) are unattainable, irrespective of the number of subjects per center.

Taken as a whole, the findings of this study provide a more rigorous statistical foundation for the design of multicenter longitudinal randomized clinical trials. The results presented here apply only to the case of linear mixed-effects regression models. Future work in this area should examine the statistical power characteristics of nonlinear two- and three-level mixed-effects models for binary, ordinal, nominal, and count response data. One should also investigate similar results for other tests used in longitudinal clinical trials and incorporate the issue of controlling Type I error into the cost optimization exercise.

6. Supplementary Materials
Web Appendices containing the proofs of Theorems 1 and 2, and Web Tables with simulation results on the sensitivity analysis of the proposed formula, referenced in Sections 1 and 2, respectively, are available under the Paper Information link at the Biometrics website http://www.tibs.org/biometrics.

Acknowledgements
The authors thank an associate editor and two referees for their insightful comments, which significantly improved the quality of this article. This work was supported in part by a grant from the National Institute of Mental Health (R01 MH069353-01).

References
Ahn, C., Overall, J. E., and Tonidandel, S. (2001). Sample size and power in repeated measurement analysis. Computer Methods and Programs in Biomedicine 64, 121–124.
Bloch, D. A. (1986). Sample size requirements and the cost of a randomized clinical trial with repeated measurement. Statistics in Medicine 5, 663–667.
Diggle, P. J., Heagerty, P., Liang, K. Y., and Zeger, S. L. (2002). Analysis of Longitudinal Data. New York: Oxford University Press.
Donner, A. (1992). Sample size requirements for cluster randomization designs. Statistics in Medicine 11, 743–750.
Donner, A. and Klar, N. (2000). Design and Analysis of Cluster Randomised Trials in Health Research. London: Arnold.
Fitzmaurice, G. M., Laird, N. M., and Ware, J. H. (2004). Applied Longitudinal Analysis. New York: Wiley.
Hedeker, D. and Gibbons, R. D. (2006). Longitudinal Data Analysis. New York: Wiley.
Hedeker, D., Gibbons, R. D., and Waternaux, C. (1999). Sample size estimation for longitudinal designs with attrition: Comparing time-related contrasts between two groups. Journal of Educational and Behavioral Statistics 24, 70–93.
Helms, R. W. (1992). Intentionally incomplete longitudinal designs: Methodology and comparison of some full span designs. Statistics in Medicine 11, 1889–1913.
Kenward, M. G. and Roger, J. H. (1997). Small sample inference for fixed effects from restricted maximum likelihood. Biometrics 53, 983–997.
Kirby, A. J., Galai, N., and Munoz, A. (1994). Sample size estimation using repeated measurements on biomarkers as outcomes. Controlled Clinical Trials 15, 165–172.
Kish, L. and Frankel, M. R. (1974). Inference from complex samples. Journal of the Royal Statistical Society, Series B 36, 1–37.
Liu, G. and Liang, K. Y. (1997). Sample size calculations for studies with correlated observations. Biometrics 53, 937–947.
Lorr, M. and Klett, C. J. (1966). Inpatient Multidimensional Psychiatric Scale: Manual. Palo Alto, California: Consulting Psychologists Press.
Lui, K. J. and Cumberland, W. G. (1992). Sample size requirement for repeated measurements in continuous data. Statistics in Medicine 11, 633–641.
Muller, K. E. and Barton, C. N. (1989). Approximate power for repeated measures ANOVA lacking sphericity. Journal of the American Statistical Association 84, 549–555.
Muller, K. E., LaVange, L. M., Ramey, S. L., and Ramey, C. T. (1992). Power calculations for general linear multivariate models including repeated measures applications. Journal of the American Statistical Association 87, 1209–1226.
Overall, J. E. and Doyle, S. R. (1994). Estimating sample sizes for repeated measurement designs. Controlled Clinical Trials 15, 100–123.
Raudenbush, S. W. and Liu, X. F. (2000). Statistical power and optimal design for multisite randomized trials. Psychological Methods 5, 199–213.
Snijders, T. A. B. and Bosker, R. J. (1999). Multilevel Analysis. London: Sage Publications.
Verbeke, G. and Molenberghs, G. (2000). Linear Mixed Models for Longitudinal Data. New York: Springer.

Received May 2006. Revised September 2006. Accepted November 2006.