
I am quite confused about the random-effects part of a within-subject repeated-measures experiment. I've read several articles and posts, but they offer different perspectives. Basically, I have an experiment with 2 groups (control, experimental), 1 within-subject factor (stimulus type, 3 levels), and 20 trials in each condition. So each subject, in both groups, performs all within-subject conditions.

library(tidyverse)

set.seed(1) # for reproducibility

within1 <- c("a", "b", "c") # within-subject factor (stimulus type)
rept <- 1:20                # 20 trials per condition
id <- 1:10                  # 10 subjects in each group

temp <- expand_grid(id, within1, rept)

dat <- temp %>% 
  bind_rows(temp, .id = "group") %>% # stack twice: one copy per group
  mutate(group = ifelse(group == "1", "control", "exp"),
         id = paste(group, id, sep = "_"), # subject IDs must be unique across groups (group is between-subjects)
         y = rnorm(n()), # random response
         across(c(group, id, within1), factor)) %>% # factors, so contrasts() works below
  select(-rept) # drop the trial index; 20 rows per subject x condition cell remain
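
Just as a sanity check on the simulated structure (assuming the simulation code above), every group × subject × stimulus-type cell should contain 20 trial rows:

dat %>% 
  count(group, id, within1) %>% 
  distinct(n) # should be a single value: 20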

Now, using a standard repeated-measures ANOVA (e.g. via afex::aov_car()), the formula should be:

library(afex)
aov_car(y ~ within1 * group + Error(id/within1), data = dat)
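
One thing worth noting here (as far as I understand afex's behaviour): with 20 trials per cell, aov_car() aggregates the data to one value per subject and cell before fitting, using the mean by default, and reports a message about it. The explicit equivalent would be something like:

dat_agg <- dat %>% 
  group_by(group, id, within1) %>% 
  summarise(y = mean(y), .groups = "drop") # one mean per subject x condition cell

aov_car(y ~ within1 * group + Error(id/within1), data = dat_agg)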

If I want to use a more flexible mixed-models approach with the lme4 package, I would write this model as:

library(lme4)
contrasts(dat$within1) <- contr.sum
contrasts(dat$group) <- contr.sum
lmer(y ~ within1 * group + (1|id), data = dat)

My questions are:

  • Is the random-effects specification ((1|id)) correct to deal not only with the repeated observations from the same subject (the multiple trials) but also with the within-subjects factor?
  • Maybe the final goal should not be to replicate the aov() results, given that mixed models relax some ANOVA assumptions and requirements; however, I am concerned about making a conceptual error by not specifying the within-subjects factor anywhere in my lmer formula.

1 Answer


Adding (1|id) adds a random intercept for each subject. For some datasets, this is enough to account for the variation tied to the person, and it also handles the repeated measures within each individual. However, best practice is to consider additional random terms (for example, a random slope for the within-subject factor) and assess the change in deviance between the models. Andy can tell you why. Good luck!
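
A minimal sketch of that comparison, using the dat simulated in the question (with purely random responses the slope model may come back singular, but the mechanics are the same):

library(lme4)

m_intercept <- lmer(y ~ within1 * group + (1 | id), data = dat, REML = FALSE)
m_slope     <- lmer(y ~ within1 * group + (within1 | id), data = dat, REML = FALSE)

# likelihood-ratio test: does letting the within1 effect vary by subject
# reduce the deviance enough to justify the extra variance/covariance parameters?
anova(m_intercept, m_slope)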

Magnus Nordmo