3 Description

In a row and column design the experimental material can be characterized by a two-way classification, nominally called rows and columns. Each experimental unit can be considered as being located in a particular row and column. It is assumed that all rows are of the same length and all columns are of the same length. Sets of equal numbers of rows and columns can be grouped together to form replicates, sometimes known as squares or rectangles, as appropriate.

If, for a replicate, the number of rows, the number of columns and the number of treatments are equal, and every treatment occurs once in each row and each column, then the design is a Latin square. If this is not the case the treatments will be non-orthogonal to rows and columns. For example, in the case of a lattice square each treatment occurs only once in each square.

For a row and column design, with t treatments in r rows and c columns and b replicates or squares, so that there are n = brc observations, the linear model is:

y_{ijkl} = μ + β_i + ρ_j + γ_k + τ_l + e_{ijkl}

for i=1,2,…,b; j=1,2,…,r; k=1,2,…,c; l=1,2,…,t, where β_i is the effect of the ith replicate, ρ_j is the effect of the jth row, γ_k is the effect of the kth column, τ_l is the effect of the lth treatment, e_{ijkl} is the random error term, and the ijkl notation indicates that the lth treatment is applied to the unit in row j, column k of replicate i.

To compute the analysis of variance for a row and column design, the mean is computed and subtracted from the observations to give y′_{ijkl} = y_{ijkl} − μ̂. Since the replicates, rows and columns are orthogonal, the estimated effects, ignoring treatment effects, β̂_i, ρ̂_j, γ̂_k, can be computed using the appropriate means of the y′_{ijkl}, and the unadjusted sums of squares computed as the appropriate sums of squared totals for the y′_{ijkl} divided by the number of units per total. The observations adjusted for replicates, rows and columns can then be computed by subtracting the estimated effects from y′_{ijkl} to give y″_{ijkl}.
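The sweep just described can be sketched in C as follows. This is an illustrative sketch only, not the NAG implementation; the function name sweep_blocks and its signature are invented for the example, and the data layout assumed is columns within rows within replicates, as used by this function.

```c
#include <assert.h>
#include <math.h>

/* Sketch of the orthogonal sweep described above: subtract the grand
 * mean, then the replicate, row and column effects estimated from the
 * centred data, leaving y''_{ijkl} in place.  y is ordered columns
 * within rows within replicates, with b replicates of r rows by c
 * columns. */
static void sweep_blocks(double *y, int b, int r, int c)
{
    int n = b * r * c, i, j, k;
    double gmean = 0.0;

    for (i = 0; i < n; i++) gmean += y[i];
    gmean /= n;
    for (i = 0; i < n; i++) y[i] -= gmean;          /* y' = y - mu-hat */

    for (i = 0; i < b; i++) {                       /* replicate effects */
        double m = 0.0;
        for (j = 0; j < r * c; j++) m += y[i * r * c + j];
        m /= (r * c);
        for (j = 0; j < r * c; j++) y[i * r * c + j] -= m;
    }
    for (i = 0; i < b; i++)                         /* rows within replicates */
        for (j = 0; j < r; j++) {
            double m = 0.0;
            for (k = 0; k < c; k++) m += y[(i * r + j) * c + k];
            m /= c;
            for (k = 0; k < c; k++) y[(i * r + j) * c + k] -= m;
        }
    for (i = 0; i < b; i++)                         /* columns within replicates */
        for (k = 0; k < c; k++) {
            double m = 0.0;
            for (j = 0; j < r; j++) m += y[(i * r + j) * c + k];
            m /= r;
            for (j = 0; j < r; j++) y[(i * r + j) * c + k] -= m;
        }
}
```

Because rows and columns within a complete replicate are orthogonal, sweeping row means and then column means removes both sets of effects exactly; after the sweep every row and column of each replicate sums to zero.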

In the case of a Latin square design the treatments are orthogonal to replicates, rows and columns, and so the treatment effects, τ̂_l, can be estimated as the treatment means of the adjusted observations, y″_{ijkl}. The treatment sum of squares is computed as the sum of squared treatment totals of the y″_{ijkl} divided by the number of times each treatment is replicated. Finally, the residuals, and hence the residual sum of squares, are given by r_{ijkl} = y″_{ijkl} − τ̂_l.

For a design which is not orthogonal, for example a lattice square or an incomplete Latin square, the treatment effects adjusted for replicates, rows and columns need to be computed. The adjusted treatment effects are found as the solution to the equations:

A τ̂ = q,  where  A = R − N_b N_b^T/(rc) − N_r N_r^T/(bc) − N_c N_c^T/(br)

where q is the vector of treatment totals of the observations adjusted for replicates, rows and columns, the y″_{ijkl}; R is a diagonal matrix with R_{ll} equal to the number of times the lth treatment is replicated; and N_b is the t by b incidence matrix, with (N_b)_{l,i} equal to the number of times treatment l occurs in replicate i, with N_r and N_c similarly defined for rows and columns. The solution to the equations can be written as:

τ̂ = Ω q

where Ω is a generalized inverse of A. The solution is found from the eigenvalue decomposition of A. The residuals are first calculated by subtracting the estimated adjusted treatment effects from the adjusted observations to give r′_{ijkl} = y″_{ijkl} − τ̂_l. However, since only the unadjusted replicate, row and column effects have been removed, and these are not orthogonal to treatments, the replicate, row and column means of the r′_{ijkl} have to be subtracted to give the correct residuals, r_{ijkl}, and residual sum of squares.
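The generalized inverse can be written explicitly in terms of the eigenvalue decomposition mentioned above. One standard choice (the Moore–Penrose inverse of the symmetric matrix A) inverts only the nonzero eigenvalues:

```latex
A = \sum_{i} \lambda_i u_i u_i^{\mathsf{T}}
\qquad\Longrightarrow\qquad
\Omega = \sum_{\lambda_i \neq 0} \lambda_i^{-1} u_i u_i^{\mathsf{T}},
\qquad
\hat{\tau} = \Omega q .
```

Eigenvectors u_i with λ_i = 0 correspond to treatment contrasts that cannot be estimated in the units stratum.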

Given the sums of squares, the mean squares are computed as the sums of squares divided by their degrees of freedom. The degrees of freedom for the unadjusted replicates, rows and columns are b−1, r−1 and c−1 respectively, and for Latin square designs the degrees of freedom for treatments is t−1. In the general case the degrees of freedom for treatments is the rank of the matrix Ω. The F-statistic, given by the ratio of the treatment mean square to the residual mean square, tests the hypothesis:

H_0: τ_1 = τ_2 = ⋯ = τ_t = 0.

The standard errors for the difference in treatment effects, or treatment means, for Latin square designs, are given by:

se(τ̂_j − τ̂_{j*}) = √(2s²/(bt))

where s2 is the residual mean square. In the general case the variances of the treatment effects are given by:

Var(τ̂) = Ω s²

from which the appropriate standard errors of the difference between treatment effects or the difference between adjusted means can be calculated.
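As a worked illustration of the Latin square formula above (the numbers are invented for the example): with b = 2 replicates of a 5×5 Latin square, each treatment is replicated bt = 10 times, so a residual mean square of s² = 0.45 gives

```latex
\mathrm{se}(\hat{\tau}_j - \hat{\tau}_{j^*})
  = \sqrt{2 s^2 / (bt)}
  = \sqrt{2 \times 0.45 / 10}
  = \sqrt{0.09}
  = 0.3 .
```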

The analysis of a row and column design can be considered as consisting of different strata: the replicate stratum, the rows-within-replicates and columns-within-replicates strata, and the units stratum. In the Latin square design all the information on the treatment effects is given at the units stratum. In other designs there may be a loss of information due to the non-orthogonality of treatments with replicates, rows and columns, and information on treatments may be available in higher strata. The efficiency of the estimation at the units stratum is given by the (canonical) efficiency factors: these are the nonzero eigenvalues of the matrix A divided by the number of replicates, in the case of equal replication, or by the mean number of replicates in the unequally replicated case (see John (1987)). If more than one eigenvalue is zero then the design is said to be disconnected, and information on some treatment comparisons can only be obtained from higher strata.

4 References

John J A (1987) Cyclic Designs Chapman and Hall

John J A and Quenouille M H (1977) Experiments: Design and Analysis Griffin

Searle S R (1971) Linear Models Wiley

5 Arguments

1:
nrep – Integer Input

On entry: the number of replicates, b.

Constraint:
nrep≥1.

2:
nrow – Integer Input

On entry: the number of rows per replicate, r.

Constraint:
nrow≥2.

3:
ncol – Integer Input

On entry: the number of columns per replicate, c.

Constraint:
ncol≥2.

4:
y[nrep×nrow×ncol] – const double Input

On entry: the n=brc observations ordered by columns within rows within replicates. That is, y[rc(i−1)+c(j−1)+k−1] contains the observation from the kth column of the jth row of the ith replicate, for i=1,2,…,b; j=1,2,…,r and k=1,2,…,c.
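As an illustration of this ordering, the 0-based offset into y can be computed as below. The helper y_index is not part of the library API; it simply encodes the columns-within-rows-within-replicates layout described above, with column index varying fastest.

```c
#include <assert.h>

/* 0-based offset into the y array of the observation in column k of
 * row j of replicate i (all indices 1-based), for a design with r rows
 * and c columns per replicate: the column index varies fastest. */
static int y_index(int i, int j, int k, int r, int c)
{
    return r * c * (i - 1) + c * (j - 1) + (k - 1);
}
```

For example, the first observation of the first replicate is at offset 0, and the last observation of a design with b replicates is at offset brc−1.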

5:
nt – Integer Input

On entry: the number of treatments. If only replicates, rows and columns are required in the analysis then set nt=1.

Constraint:
nt≥1.

6:
it[nrep×nrow×ncol] – const Integer Input

On entry: if nt>1, it[i-1] indicates which of the nt treatments unit i received, i=1,2,…,n. If nt=1, it is not referenced.

8:
tmean[nt] – double Output

On exit: if nt≥2, tmean[l−1] contains the (adjusted) mean for the lth treatment, μ̂* + τ̂_l, for l=1,2,…,t, where μ̂* is the mean of the treatment-adjusted observations, y_{ijkl} − τ̂_l. Otherwise tmean is not referenced.

9:
table[6×5] – double Output

Note: the (i,j)th element of the matrix is stored in table[(i−1)×5+(j−1)].

On exit: the analysis of variance table. Column 1 contains the degrees of freedom, column 2 the sum of squares, and where appropriate, column 3 the mean squares, column 4 the F-statistic and column 5 the significance level of the F-statistic. Row 1 is for replicates, row 2 for rows, row 3 for columns, row 4 for treatments (if nt>1), row 5 for residual and row 6 for total. Mean squares are computed for all but the total row, F-statistics and significance are computed for treatments, replicates, rows and columns. Any unfilled cells are set to zero.

10:
c[nt×tdc] – double Output

On exit: the upper triangular part of c contains the variance-covariance matrix of the treatment effects, and the strictly lower triangular part contains the standard errors of the differences between two treatment effects (means). That is, c[(i−1)×tdc+(j−1)] contains the covariance of treatments i and j if j≥i, and the standard error of the difference between treatments i and j if j<i, for i=1,2,…,t and j=1,2,…,t.
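Helper accessors for this packed storage might look like the following. These are illustrative only; treat_cov and treat_sed are not part of the library, they simply encode the upper/lower-triangle convention described above.

```c
#include <assert.h>

/* Covariance of treatment effects i and j (1-based), read from the
 * upper triangle (j >= i, including the diagonal variances). */
static double treat_cov(const double c[], int tdc, int i, int j)
{
    return (j >= i) ? c[(i - 1) * tdc + (j - 1)]
                    : c[(j - 1) * tdc + (i - 1)];
}

/* Standard error of the difference between treatments i and j
 * (1-based, i != j), read from the strictly lower triangle (j < i). */
static double treat_sed(const double c[], int tdc, int i, int j)
{
    return (i > j) ? c[(i - 1) * tdc + (j - 1)]
                   : c[(j - 1) * tdc + (i - 1)];
}
```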

11:
tdc – Integer Input

On entry: the stride separating matrix column elements in the array c.

6 Error Indicators and Warnings

The design is disconnected; the standard errors may not be valid. The design may have a nested structure.

NE_G04BC_REPS

The treatments are totally confounded with replicates, rows and columns, so the treatment sum of squares and degrees of freedom are zero. The analysis of variance table is not computed, except for replicate, row, column, total sum of squares and degrees of freedom.

NE_G04BC_RESD

The residual degrees of freedom or the residual sum of squares are zero, columns 3, 4 and 5 of the analysis of variance table will not be computed and the matrix of standard errors and covariances, c, will not be scaled.

NE_G04BC_ST_ERR

A computed standard error is zero due to rounding errors, or the eigenvalue computation failed to converge. Both are unlikely errors.

NE_INT_ARG_LT

On entry, irdf=value.
Constraint: irdf≥0.

On entry, ncol=value.
Constraint: ncol≥2.

On entry, nrep=value.
Constraint: nrep≥1.

On entry, nrow=value.
Constraint: nrow≥2.

On entry, nt=value.
Constraint: nt≥1.

NE_INTERNAL_ERROR

An internal error has occurred in this function. Check the function call
and any array sizes. If the call is correct then please contact NAG for
assistance.

7 Accuracy

The algorithm used in nag_anova_row_col (g04bcc), described in Section 3, achieves greater accuracy than the traditional algorithms based on the subtraction of sums of squares.

8 Further Comments

To estimate missing values the Healy and Westmacott procedure or its derivatives may be used (see John and Quenouille (1977)). This is an iterative procedure in which estimates of the missing values are adjusted by subtracting the corresponding values of the residuals. The new estimates are then used in the analysis of variance. This process is repeated until convergence. A suitable initial value may be the grand mean. When using this procedure irdf should be set to the number of missing values plus one to obtain the correct degrees of freedom for the residual sum of squares.
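The iteration described above can be sketched for the simplified case of a single missing value in an additive row-plus-column layout (one replicate, no treatments). This is a hedged illustration only: healy_westmacott is an invented name, and a real application would refit the full replicate/row/column/treatment model at each pass.

```c
#include <assert.h>
#include <math.h>

/* Healy and Westmacott iteration for one missing cell (mi, mj),
 * 0-based, in an nr by nc table under an additive row+column model.
 * The missing cell is filled with a starting value (e.g. the grand
 * mean of the observed data), the additive model is fitted to the
 * completed table, and the cell is replaced by its fitted value
 * (equivalently, its residual is subtracted) until convergence. */
static double healy_westmacott(double *y, int nr, int nc,
                               int mi, int mj, double start, int maxit)
{
    double m = start;
    int it, i, j;
    for (it = 0; it < maxit; it++) {
        double grand = 0.0, rmean = 0.0, cmean = 0.0, fitted;
        y[mi * nc + mj] = m;
        for (i = 0; i < nr; i++)
            for (j = 0; j < nc; j++) grand += y[i * nc + j];
        grand /= (nr * nc);
        for (j = 0; j < nc; j++) rmean += y[mi * nc + j];
        rmean /= nc;
        for (i = 0; i < nr; i++) cmean += y[i * nc + mj];
        cmean /= nr;
        fitted = rmean + cmean - grand;   /* additive fit at the cell */
        if (fabs(fitted - m) < 1e-12) { m = fitted; break; }
        m = fitted;
    }
    y[mi * nc + mj] = m;
    return m;
}
```

For a 2×2 table with entries 1, 2, 3 and one missing cell, the additive model is exact and the iteration converges to the value 4 that completes the pattern.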

For analysis of covariance the residuals are obtained from an analysis of variance of both the response variable and the covariates. The residuals from the response variable are then regressed on the residuals from the covariates using, say, nag_regress_confid_interval (g02cbc) or nag_regsn_mult_linear (g02dac). The results from those functions can be used to test for the significance of the covariates. To test the significance of the treatment effects after fitting the covariate, the residual sum of squares from the regression should be compared with the residual sum of squares obtained from the equivalent regression but using the residuals from fitting replicates, rows and columns only.
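The core of the covariance step, regressing the response residuals on the covariate residuals, can be sketched as below for a single covariate. This is a simplified illustration (an invented helper, not a substitute for g02cbc or g02dac); since both residual vectors have zero mean, the regression passes through the origin.

```c
#include <assert.h>
#include <math.h>

/* Regress response residuals ey on covariate residuals ex (least
 * squares through the origin) and return the residual sum of squares
 * after fitting the covariate; the slope estimate is returned via
 * *slope.  Both inputs are assumed to have zero mean. */
static double rss_after_covariate(const double ey[], const double ex[],
                                  int n, double *slope)
{
    double sxy = 0.0, sxx = 0.0, syy = 0.0;
    int i;
    for (i = 0; i < n; i++) {
        sxy += ex[i] * ey[i];
        sxx += ex[i] * ex[i];
        syy += ey[i] * ey[i];
    }
    *slope = sxy / sxx;
    return syy - (*slope) * sxy;   /* RSS = Syy - b * Sxy */
}
```

Comparing this residual sum of squares with the one obtained from the residuals after fitting replicates, rows and columns only gives the test for treatments adjusted for the covariate.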

9 Example

The data for a 5×5 Latin square are input, and the ANOVA table and treatment means are computed and printed. Since the design is orthogonal, only one standard error need be printed.