I have a bit of a strange question. I ran the model below, which includes 'Valence.c' as one of the predictors. This predictor was originally coded '0' or '1', representing 'positive' and 'negative', and was then centered, so the values are actually -0.5 and 0.5 (the centering is sketched just after the model call).
> loss.1 <- glmer.nb(Loss_across.Chain ~ Posn.c*Valence.c + (Valence.c|mood.c/Chain), data = FinalData_forpoisson, control = glmerControl(optimizer = "bobyqa", check.conv.grad = .makeCC("warning", 0.05)))
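For reference, the centering itself is nothing fancy; it was along these lines (just a sketch, since the raw 0/1 column isn't shown above and its name is assumed):
> # hypothetical raw column 'Valence': 0 = positive, 1 = negative
> FinalData_forpoisson$Valence.c <- FinalData_forpoisson$Valence - 0.5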
I got the following output:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
 Family: Negative Binomial(4.9852)  ( log )
Formula: Loss_across.Chain ~ Posn.c * Valence.c + (Valence.c | mood.c/Chain)
   Data: FinalData_forpoisson
Control: ..3

     AIC      BIC   logLik deviance df.resid
  1894.7   1945.3   -936.4   1872.7      725

Scaled residuals:
    Min      1Q  Median      3Q     Max
-1.3882 -0.7225 -0.5190  0.4375  7.1873

Random effects:
 Groups       Name        Variance  Std.Dev.  Corr
 Chain:mood.c (Intercept) 8.782e-15 9.371e-08
              Valence.c   9.608e-15 9.802e-08 0.48
 mood.c       (Intercept) 0.000e+00 0.000e+00
              Valence.c   1.654e-14 1.286e-07  NaN
Number of obs: 736, groups:  Chain:mood.c, 92; mood.c, 2

Fixed effects:
                 Estimate Std. Error z value Pr(>|z|)
(Intercept)      -0.19255    0.04794  -4.016 5.92e-05 ***
Posn.c           -0.61011    0.04122 -14.800  < 2e-16 ***
Valence.c        -0.27372    0.09589  -2.855  0.00431 **
Posn.c:Valence.c  0.38043    0.08245   4.614 3.95e-06 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Correlation of Fixed Effects:
            (Intr) Posn.c Vlnc.c
Posn.c       0.491
Valence.c    0.029 -0.090
Psn.c:Vlnc. -0.090  0.062  0.491
As the fixed effect for Valence.c was negative, I thought I would try re-coding the variable so that positive is now '0.5' and negative is now '-0.5': I figured an increase in the incidence rate would be easier to explain than a decrease. So I ran this model, which is identical except that the data file it calls has the reversed coding (a sketch of the reversal is just after the model call):
> loss.2 <- glmer.nb(Loss_across.Chain ~ Posn.c*Valence.c + (Valence.c|mood.c/Chain), data = LossAnalysis_ValenceCodingReversed, control = glmerControl(optimizer = "bobyqa", check.conv.grad = .makeCC("warning", 0.05)))
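For completeness, the reversal in that data frame is nothing more than a sign flip of the centered predictor, roughly like this (a sketch only; in reality the reversed file was prepared separately, so the exact steps and column names are assumed):
> LossAnalysis_ValenceCodingReversed <- FinalData_forpoisson
> LossAnalysis_ValenceCodingReversed$Valence.c <- -1 * FinalData_forpoisson$Valence.c
The reason I wanted the flip: from the first model, exp(-0.27372) ≈ 0.76 is roughly a 24% lower incidence rate, whereas with the sign reversed the same effect reads as exp(0.27372) ≈ 1.31, roughly a 31% higher rate, which seems easier to explain.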
I got this warning message:
Warning messages:
1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
unable to evaluate scaled gradient
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, :
Model failed to converge: degenerate Hessian with 1 negative eigenvalues
Why would reversing the coding (effectively just flipping the sign of Valence.c, i.e. changing which level plays the role of the 'reference') mean that the model now fails to converge? I have the same number of observations for positive and negative (checked with the one-liners below). Any help would be great!
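(The balance itself is easy to verify, assuming the centered column name:
> table(FinalData_forpoisson$Valence.c)
> table(LossAnalysis_ValenceCodingReversed$Valence.c)
Both data frames show the same split across the two values.)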
Thanks