I've successfully run a glmer model using afex::mixed(). It took a while to find a model that would converge, as I have a number of variables. Here is the structure of my data:
> head(data1)
# A tibble: 6 x 8
  Speaker data_type learned_next AOP_scaled length_scaled PAT_scaled PAQ_scaled freq_scaled
  <chr>   <chr>            <dbl>      <dbl>         <dbl>      <dbl>      <dbl>       <dbl>
1 Alex    actual               0     -0.337        -2.34       -1.34     -0.345    -0.00436
2 Alex    actual               0     -0.337        -0.989      -1.34     -0.345    -0.00436
3 Alex    actual               0     -0.337        -2.34       -1.14     -0.345    -0.00436
4 Alex    actual               0     -0.337        -0.989      -1.14     -0.345    -0.00436
5 Alex    actual               0     -0.337        -2.34       -0.720    -0.345    -0.00436
6 Alex    actual               0     -0.337        -0.989      -0.720    -0.345    -0.00436
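In case it helps anyone reproduce the setup, here is a purely hypothetical toy construction of data with the same shape (placeholder speaker IDs and simulated values, not my actual data):

library(afex)    # provides mixed() and all_fit() used below
library(tibble)

# Hypothetical toy data mirroring the column layout of data1:
# 5 placeholder speakers, simulated scaled predictors, binary outcome
set.seed(1)
n <- 500
data1_toy <- tibble(
  Speaker       = sample(paste0("Speaker", 1:5), n, replace = TRUE),
  data_type     = sample(c("actual", "other"), n, replace = TRUE),
  learned_next  = rbinom(n, 1, 0.3),
  AOP_scaled    = rnorm(n),
  length_scaled = rnorm(n),
  PAT_scaled    = rnorm(n),
  PAQ_scaled    = rnorm(n),
  freq_scaled   = rnorm(n)
)

The final model looks like this: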
model_max <- mixed(learned_next ~
                     PAQ_scaled * PAT_scaled * length_scaled * freq_scaled * AOP_scaled +
                     (1 | Speaker),
                   family = binomial,
                   data = subset(data1, data_type == "actual"),
                   method = "LRT",
                   # specifying the optimizer to support convergence
                   # (does not converge without this)
                   control = glmerControl(calc.derivs = FALSE,
                                          optimizer = "bobyqa",
                                          optCtrl = list(maxfun = 2e5)),
                   expand_re = TRUE)
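For context, my understanding is that the full model stored by mixed() corresponds to a direct glmer() fit roughly along these lines (a sketch only, not verified against the afex internals; since the random-effects part is just a by-Speaker intercept, expand_re shouldn't change anything here):

library(lme4)

# Sketch of the (presumed) underlying full model fitted by mixed() above;
# could be used to check whether all_fit() behaves differently on a plain glmer fit
model_glmer <- glmer(
  learned_next ~ PAQ_scaled * PAT_scaled * length_scaled * freq_scaled * AOP_scaled +
    (1 | Speaker),
  family = binomial,
  data = subset(data1, data_type == "actual"),
  control = glmerControl(calc.derivs = FALSE,
                         optimizer = "bobyqa",
                         optCtrl = list(maxfun = 2e5))
)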
The full model output from mixed() looks like this (leaving out the interactions to make it more manageable):
> model_max$full_model
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: learned_next ~ PAQ_scaled * PAT_scaled * length_scaled * freq_scaled * AOP_scaled + (1 | Speaker)
Data: data
AIC BIC logLik deviance df.resid
14431.354 14692.583 -7182.677 14365.354 20219
Random effects:
Groups Name Std.Dev.
Speaker (Intercept) 0.6917
Number of obs: 20252, groups: Speaker, 5
Fixed Effects:
  (Intercept)     PAQ_scaled     PAT_scaled  length_scaled    freq_scaled     AOP_scaled
    -0.805275      -0.063157      -0.831367       0.067195       0.070104      -0.774926
But when I run gm_all <- afex::all_fit(model_max$full_model)
I get the following output:
bobyqa. : [ERROR]
Nelder_Mead. : [ERROR]
optimx.nlminb : [ERROR]
optimx.L-BFGS-B : [ERROR]
nloptwrap.NLOPT_LN_NELDERMEAD : [ERROR]
nloptwrap.NLOPT_LN_BOBYQA : [ERROR]
nmkbw. : [ERROR]
I can't find anything online about why these might all be coming back as errors. Given that the model output looks fine (unless I'm missing something), I don't think the problem is with the model itself.
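If it helps, this is how I'd expect to pull out the actual error messages from the all_fit() result (a sketch, assuming that, like lme4::allFit(), failed refits are returned in the list as error/condition objects):

# Which elements of gm_all are fitted models and which are errors?
sapply(gm_all, function(x) class(x)[1])

# Show the underlying error message for each refit that failed
failed <- vapply(gm_all, inherits, logical(1), what = "error")
lapply(gm_all[failed], conditionMessage)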