
I am fitting a linear mixed effects model using lme4:

library(lme4)
data(Orthodont, package="nlme")  # Orthodont ships with nlme
dent <- Orthodont
d.test <- lmer(distance ~ age + (1|Subject), data=dent)

If the linear mixed effects model is written generically as Y = X * B + Z * d + e, then I am trying to get Var(Y) = Z * Var(d) * Z^t + Var(e) from the fitted model.

Is the following formulation the right way to do this?

k <- table(dent$Subject)[1]
vars <- VarCorr(d.test)
v <- as.data.frame(vars)
sigma <- attr(vars, "sc")
s.tech <- diag(v$vcov[1], nrow=k)
icc <- v$vcov[1]/sum(v$vcov)
s.tech[upper.tri(s.tech)] <- icc
s.tech[lower.tri(s.tech)] <- icc
sI <- diag(sigma^2, nrow=length(dent$age))
var.b <- kronecker(diag(1, nrow=length(dent$age)/k), s.tech)
var.y <- sI + var.b

I think this is a simple question, but I can't find code for doing this anywhere, so I'm asking whether I'm doing it right.

learner
  • Running your code throws an error because you are referring to an `x` object that is not defined (in `sI <- diag(sigma^2, nrow=nrow(x))`). In any case, you can get the variance-covariance matrix of any `merMod` object (which is what `lmer` generates) with the `vcov` method. In your case: `vcov(d.test)`. This only provides the matrix for the fixed effects though. – Oriol Mirosa Aug 12 '17 at 14:32
  • @OriolMirosa Thanks for pointing out the error. However, `vcov`/`VarCorr` are not the answers I am looking for - those provide variances/covariance for fixed and random effects. I am trying to get the variance/covariance of the data `Y` – learner Aug 12 '17 at 14:45

1 Answer


You can do this a bit more easily if you know about getME(), which is a general-purpose extract-bits-of-a-lmer-fit function. In particular, you can extract the transposed Z matrix (getME(.,"Zt")) and the transposed Lambda matrix; Lambda is the Cholesky factor of the scaled variance-covariance matrix of the conditional modes (BLUPs). In your notation, Var(d) is the residual variance times the cross-product of Lambda.

The answer cited here is pretty good, but the answer below is slightly more general (it should work for any lmer fit).

Fit model:

library(lme4)
data(Orthodont,package="nlme")
d.test <- lmer(distance ~ age + (1|Subject), data=Orthodont)

Extract components:

var.d <- crossprod(getME(d.test,"Lambdat"))  # scaled covariance of the random effects, Var(d)/sigma^2
Zt <- getME(d.test,"Zt")                     # transposed random-effects model matrix Z'
vr <- sigma(d.test)^2                        # residual variance sigma^2
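 
As a quick sanity check (a sketch assuming the single random-intercept structure fitted above), scaling the relative covariance back up by the residual variance should reproduce the Subject variance that VarCorr() reports:

## each diagonal entry of var.d is the Subject variance divided by sigma^2
all.equal(vr * var.d[1, 1], c(VarCorr(d.test)$Subject))  # should be TRUE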

Combine them:

var.b <- vr*(t(Zt) %*% var.d %*% Zt)  # Z %*% Var(d) %*% Z'
sI <- vr * Diagonal(nrow(Orthodont))  # Var(e) = sigma^2 * I
var.y <- var.b + sI                   # Var(Y) = Z Var(d) Z' + Var(e)
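 
Another small check (a sketch assuming the random-intercept model above; the object names idx, tau2, cs are only illustrative): the sub-matrix of var.y for any single subject should be compound symmetric, with the Subject variance everywhere and the residual variance added on the diagonal.

idx <- which(Orthodont$Subject == Orthodont$Subject[1])  # rows for the first subject
tau2 <- c(VarCorr(d.test)$Subject)                       # Subject (intercept) variance
cs <- tau2 * matrix(1, length(idx), length(idx)) + vr * diag(length(idx))
all.equal(as.matrix(var.y[idx, idx]), cs, check.attributes = FALSE)  # should be TRUE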

A picture:

image(var.y)

(image: block pattern of var.y, one block per subject)

Ben Bolker
  • Thanks, that's even better/more intuitive. Might I suggest this being a function in lme4? – learner Aug 12 '17 at 23:51
  • Dear @Ben; may I ask a (hopefully) quick question please. I am wanting to visualise the variance components for a glmm -- does this answer extend to these models (e.g. `d.test <- glmer(distance>24 ~ age + (1|Subject), data=Orthodont, family=binomial)`)? I suppose I'm wondering about the `sI <- vr * Diagonal(nrow(Orthodont))` term, which I doubt is correct for the residual deviance. thank you – user2957945 Aug 22 '18 at 12:36
  • it's a little different because defining the residual variance is harder. You can use various papers/documents on intra-class correlation and R^2 (which have to define an analogue of residual/lowest-level variance) to work it out: Nakagawa and Schielzeth, J. Hadfield, etc. ... – Ben Bolker Aug 22 '18 at 13:32
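 
A rough sketch of the latent-scale analogue discussed in that last comment, following the Nakagawa & Schielzeth convention of using pi^2/3 as the lowest-level (logit-scale) variance for a binomial GLMM. This is one convention among several, not an exact residual variance, and the object names below are only illustrative:

g.test <- glmer(distance > 24 ~ age + (1 | Subject), data = Orthodont, family = binomial)
var.d.g <- crossprod(getME(g.test, "Lambdat"))   # sigma() is fixed at 1 for glmer fits
Zt.g <- getME(g.test, "Zt")
var.b.g <- t(Zt.g) %*% var.d.g %*% Zt.g          # Z Var(d) Z' on the latent (logit) scale
var.ystar <- var.b.g + (pi^2 / 3) * Diagonal(nrow(Orthodont))  # add distribution-specific variance
image(var.ystar)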