
This might be a better fit for Cross Validated, so I will post it there too; I apologize if this is outside the scope of this discussion board.

I am working with the lme4 package and would like to test the significance of the random effect. I am using the RLRsim package to generate an estimate of the p-value, consistent with the recommendations in the lme4 documentation. Within RLRsim, I will use the exactRLRT function to test the significance of a random slope.

First, I estimated the full model:

data(sleepstudy, package = "lme4")
x <- lme4::lmer(Reaction ~ I(Days-4.5) + (I(Days-4.5)|Subject), 
  data = sleepstudy)

Next, I want to test the significance of the random effect on I(Days-4.5):

m0 <- update(x, . ~ . -(I(Days-4.5)|Subject) + (1|Subject))
m.slope  <- update(x, . ~ . - (I(Days-4.5)|Subject) + (0 + I(Days-4.5)|Subject))
#test for subject specific slopes:
exactRLRT(m.slope, x, m0)

However, this leads to an error message: "Random effects not independent - covariance(s) set to 0 under H0. exactRLRT can only test a single variance."

So evidently I cannot use this method when the random effects are not independent. I can re-specify the model:

mA <- lme4::lmer(Reaction ~ I(Days-4.5) + (1|Subject) + (0 + I(Days-4.5)|Subject), 
  data = sleepstudy)
m0 <- update(mA, . ~ . - (0 + I(Days-4.5)|Subject))
m.slope  <- update(mA, . ~ . - (1|Subject))
#test for subject specific slopes:
exactRLRT(m.slope, mA, m0)

This runs and gives me an estimate. However, my understanding is that specifying the random effects as independent yields a fundamentally different model from the one I originally estimated, so I am certain the p-value in this output is not valid.
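
To illustrate what I mean, here is a quick sketch comparing the two specifications (assuming both fits above succeeded; it is only meant to show that the correlation parameter is dropped in the second model):

lme4::VarCorr(x)             # correlated specification: an intercept-slope correlation is estimated
lme4::VarCorr(mA)            # independent specification: no correlation term
anova(x, mA, refit = FALSE)  # the two REML fits differ by that correlation parameter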

I am not sure what I can do (if anything) to test the significance of the random effect if the random effects are not assumed to be independent. I am using the sleepstudy data here as an example, but my actual dataset runs into convergence problems when I estimate the random effects as independent.
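
For what it is worth, when the independent-effects fit fails to converge, one thing I could presumably try first is different optimizer settings (a sketch only; the control values below are illustrative and use the sleepstudy example rather than my actual data):

ctrl <- lme4::lmerControl(optimizer = "bobyqa", optCtrl = list(maxfun = 2e5))
mA2 <- lme4::lmer(Reaction ~ I(Days-4.5) + (1|Subject) + (0 + I(Days-4.5)|Subject),
  data = sleepstudy, control = ctrl)  # same independent specification, alternative optimizer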

Any insight on this would be greatly appreciated!

Update

Here is a link to the question I posted on "Cross Validated".

z_11122
  • If you want to allow for correlation between your random slope and random intercept, you just have to include them in the same parentheses: `Reaction ~ I(Days-4.5) + (1 + I(Days-4.5) | Subject)`. Have you tried this yet? – benimwolfspelz Nov 05 '21 at 07:39

1 Answer


It's much slower, but parametric bootstrapping should work for this. (Although in this case it's still pretty fast.)

library(pbkrtest)
system.time(pp <- PBmodcomp(largeModel = x, smallModel = m0))
##    user  system elapsed 
##  53.718   0.283  10.967 

Weirdly, it seems to be using multiple cores on my machine, which I don't think it should do without asking ...

> pp
Bootstrap test; time: 8.50 sec; samples: 1000; extremes: 0;
large : Reaction ~ I(Days - 4.5) + (I(Days - 4.5) | Subject)
small : Reaction ~ I(Days - 4.5) + (1 | Subject)
         stat df   p.value    
LRT    42.114  2 7.164e-10 ***
PBtest 42.114     0.000999 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
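
Regarding the multicore behaviour: PBmodcomp has a cl argument, so one way to keep the parallelism explicit (a sketch, assuming your pbkrtest version supports passing a cluster this way) is:

library(parallel)
cl <- makeCluster(2)   # explicit 2-worker cluster under our control
pp2 <- PBmodcomp(largeModel = x, smallModel = m0, nsim = 1000, cl = cl)
stopCluster(cl)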
Ben Bolker