Multivariate linear mixed models: livin’ la vida loca

I swear there was a point in writing an introduction to covariance structures: now we can start joining all sorts of analyses using very similar notation. In a previous post I described simple (even simplistic) models for a single response variable (or ‘trait’ in quantitative geneticist speak). The R code in three R packages (asreml-R, lme4 and nlme) was quite similar and we were happy-clappy with the consistency of results across packages. The curse of the analyst/statistician/guy who dabbles in analyses is the idea that we can always fit a nicer model and—as both Bob the Builder and Obama like to say—yes, we can.

Let’s assume that we have a single trial where we have assessed our experimental units (trees in my case) for more than one response variable; for example, we measured the trees for acoustic velocity (related to stiffness) and basic density. If you don’t like trees, think of having height and body weight for people, or whatever takes your fancy. Anyway, we have our traditional randomized complete block design with random blocks, random families and that’s it. Rhetorical question: can we simultaneously run an analysis for all responses?

setwd('~/Dropbox/quantumforest')

library(asreml)

canty = read.csv('canty_trials.csv', header = TRUE)
summary(canty)

m1 = asreml(bden ~ 1, random = ~ Block + Family, data = canty)
summary(m1)$varcomp

                      gamma   component  std.error    z.ratio  constraint
Block!Block.var   0.2980766  162.74383   78.49271   2.073362    Positive
Family!Family.var 0.1516591   82.80282   29.47153   2.809587    Positive
R!variance        1.0000000  545.97983   37.18323  14.683496    Positive

m2 = asreml(veloc ~ 1, random = ~ Block + Family, data = canty)
summary(m2)$varcomp

                      gamma    component     std.error    z.ratio  constraint
Block!Block.var   0.1255846  0.002186295  0.0011774906   1.856741    Positive
Family!Family.var 0.1290489  0.002246605  0.0008311341   2.703059    Positive
R!variance        1.0000000  0.017408946  0.0012004136  14.502456    Positive

Up to this point we are using the same old code, and remember that we could fit the same model using lme4, so what’s the point of this post? Well, we can now move to fitting a multivariate model, where we have two responses at the same time (incidentally, a scatterplot of the two response variables shows a correlation of ~0.2).

We can first refit the model as a multivariate analysis, assuming block-diagonal covariance matrices. The notation now includes:

The use of cbind() to specify the response matrix,

the reserved keyword trait, which creates a vector to hold the overall mean for each response,

at(trait), which asks ASReml-R to fit an effect (e.g. Block) at each trait, by default using a diagonal covariance matrix with a separate variance for each trait. We could also use diag(trait) for the same effect,

rcov = ~ units:diag(trait) specifies a different diagonal matrix for the residuals (units) of each trait.
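Under those defaults, the full residual covariance is an identity matrix over experimental units, Kronecker-multiplied by a diagonal per-trait matrix. A toy numpy sketch (a handful of units only, nothing like the real trial; the residual variances come from the univariate fits above):

```python
import numpy as np

# Residual structure implied by rcov = ~ units:diag(trait):
# identity over units (trees) kron a diagonal 2x2 trait matrix,
# so there is no covariance between units or between traits.
n_units = 3                               # tiny example, not the real trial
R_trait = np.diag([545.98, 1.741e-02])    # residual variances for bden, veloc
R_full = np.kron(np.eye(n_units), R_trait)

print(R_full.shape)        # (6, 6)
print(R_full[0, 1])        # 0.0 -- no between-trait residual covariance yet
```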

m3 = asreml(cbind(bden, veloc) ~ trait,
            random = ~ at(trait):Block + at(trait):Family, data = canty,
            rcov = ~ units:diag(trait))
summary(m3)$varcomp

                                          gamma     component     std.error    z.ratio  constraint
at(trait,bden):Block!Block.var     1.627438e+02  1.627438e+02  78.492736507   2.073362    Positive
at(trait,veloc):Block!Block.var    2.186295e-03  2.186295e-03   0.001177495   1.856733    Positive
at(trait,bden):Family!Family.var   8.280282e+01  8.280282e+01  29.471507439   2.809589    Positive
at(trait,veloc):Family!Family.var  2.246605e-03  2.246605e-03   0.000831134   2.703059    Positive
R!variance                         1.000000e+00  1.000000e+00            NA         NA       Fixed
R!trait.bden.var                   5.459799e+02  5.459799e+02  37.183234014  14.683496    Positive
R!trait.veloc.var                  1.740894e-02  1.740894e-02   0.001200414  14.502455    Positive

Initially, you may not notice that the results are identical, because of a distracting change to scientific notation for the variance components. A closer inspection shows that we have obtained the same results for both traits, but did we gain anything? Not really: we took the defaults for the covariance components (a direct sum of diagonal matrices, which assumes uncorrelated traits). However, we can do better and actually tell ASReml-R to fit the covariance between traits for the block and family effects, as well as for the residuals.
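The change from diag(trait) to us(trait) is easy to picture as matrices. A minimal numpy sketch, using the family variances from model 3 and a made-up covariance purely for illustration:

```python
import numpy as np

# diag(trait): a variance per trait, traits assumed uncorrelated.
G_diag = np.array([[82.8, 0.0],
                   [0.0, 2.25e-03]])

# us(trait): the same variances plus a free between-trait covariance
# (0.16 is an illustrative value, not an estimate).
cov_bv = 0.16
G_us = G_diag + cov_bv * (np.ones((2, 2)) - np.eye(2))

print(G_us[0, 1])   # 0.16 -- the extra parameter us(trait) estimates
```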

m4 = asreml(cbind(bden, veloc) ~ trait,
            random = ~ us(trait):Block + us(trait):Family, data = canty,
            rcov = ~ units:us(trait))
summary(m4)$varcomp

                                       gamma     component     std.error     z.ratio  constraint
trait:Block!trait.bden:bden     1.628812e+02  1.628812e+02  7.854123e+01   2.0738303    Positive
trait:Block!trait.veloc:bden    1.960789e-01  1.960789e-01  2.273473e-01   0.8624639    Positive
trait:Block!trait.veloc:veloc   2.185595e-03  2.185595e-03  1.205128e-03   1.8135789    Positive
trait:Family!trait.bden:bden    8.248391e+01  8.248391e+01  2.932427e+01   2.8128203    Positive
trait:Family!trait.veloc:bden   1.594152e-01  1.594152e-01  1.138992e-01   1.3996166    Positive
trait:Family!trait.veloc:veloc  2.264225e-03  2.264225e-03  8.188618e-04   2.7650886    Positive
R!variance                      1.000000e+00  1.000000e+00            NA          NA       Fixed
R!trait.bden:bden               5.460010e+02  5.460010e+02  3.712833e+01  14.7057812    Positive
R!trait.veloc:bden              6.028132e-01  6.028132e-01  1.387624e-01   4.3442117    Positive
R!trait.veloc:veloc             1.710482e-02  1.710482e-02  9.820673e-04  17.4171524    Positive

Moving from model 3 to model 4 we added three covariance components (one each for block, family and residuals) and improved the log-likelihood by 8.5. A quick look at the output from m4 indicates that most of that gain comes from allowing for the covariance of residuals between the two traits, as the covariances for family and, particularly, block are more modest:

summary(m3)$loglik
[1] -1133.312

summary(m4)$loglik
[1] -1124.781
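Since model 3 is just model 4 with the three covariances constrained to zero, the improvement can be checked with a likelihood-ratio test on 3 degrees of freedom. A quick stdlib-only sketch (the closed-form chi-square tail below is specific to 3 df):

```python
import math

loglik_m3, loglik_m4 = -1133.312, -1124.781
lrt = 2 * (loglik_m4 - loglik_m3)   # about 17.06

def chi2_sf_3df(x):
    # Upper tail of a chi-square with 3 df (closed form for odd df).
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

print(round(lrt, 3), chi2_sf_3df(lrt) < 0.001)   # 17.062 True
```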

An alternative parameterization for model 4 is a correlation matrix with heterogeneous variances (corgh(trait) instead of us(trait)), which models the correlations and the variances rather than the covariances and the variances. This parameterization is sometimes more numerically stable.
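The two parameterizations carry the same information; converting a us-style covariance matrix into corgh-style correlations plus variances is just the usual cov2cor operation. A numpy sketch using the residual components of m4 reported above:

```python
import numpy as np

# Residual (co)variances between bden and veloc from model m4.
R = np.array([[5.460010e+02, 6.028132e-01],
              [6.028132e-01, 1.710482e-02]])

sd = np.sqrt(np.diag(R))
corr = R / np.outer(sd, sd)       # cov2cor: covariance -> correlation

print(round(corr[0, 1], 3))       # residual correlation, about 0.2
```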

As an aside, we can estimate the between-traits correlation for Block (probably not that interesting) and for Family (much more interesting, as it is an estimate of the genetic correlation between traits): 1.594152e-01 / sqrt(8.248391e+01 * 2.264225e-03) = 0.37.
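The same back-of-the-envelope calculation as a short script, with the components copied from the us(trait):Family rows of m4:

```python
import math

# Family (genetic) components from model m4.
var_bden  = 8.248391e+01
var_veloc = 2.264225e-03
cov_bv    = 1.594152e-01

# Genetic correlation: covariance divided by the square root of the
# product of the variances.
r_g = cov_bv / math.sqrt(var_bden * var_veloc)
print(round(r_g, 2))   # 0.37
```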

4 thoughts on “Multivariate linear mixed models: livin’ la vida loca”

This post is very helpful. I tried to analyze my data using your method but I have a problem. My experiment was also an RCBD with 2 blocks and, within each block, 159 wheat RILs randomly planted. Heading date (HD), plant height (HT), grain yield (GY), etc. were collected. I want to calculate the genetic correlations among these traits. My model is the same as yours:

Sorry for the delay, but I haven’t checked this post in a while. I think that either you have poor starting values for the variance components, or your data structure leads to some variance components being negative (see this example for the latter).

If it is a problem of starting values, try first running all three bivariate analyses (HD and HT, HD and GY, HT and GY) and use those results as starting values. Have a look at the asreml-R manual to provide starting values using the init option, as in us(trait, init=c(stv1, stv2, ..., stv6)).

In the original formula, the genetic correlation is the covariance of two variables divided by the square root of the product of their variances. Here, is the quantity between bden and veloc some kind of variance that can be transformed into a covariance by taking a square root? Or is there another explanation? Covariance is the square root of a variance, isn’t it?