
I am trying to run the following code and keep getting the same convergence warnings:

> model5 <- glmer(violentyn~vpul + bmi_new + wmax + (1|fid),
    data = cohort4, family = binomial)

Warning messages:

1: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model failed to converge with max|grad| = 0.254024 (tol = 0.002, component 1)
2: In checkConv(attr(opt, "derivs"), opt$par, ctrl = control$checkConv, : Model is nearly unidentifiable: very large eigenvalue
Rescale variables?

A couple of details about the variables I am using: I am predicting violent behavior in sons (binary 0/1) from sons' resting heart rate (continuous), with BMI and physical energy capacity as continuous covariates. I am clustering on the family ID variable. This is a very large population-sized dataset that includes both fathers and sons, but the current analysis uses only the son variables.

Based on suggestions I found, I also tried running the above code with an optimizer modification added to the call, `control = glmerControl(optimizer = "bobyqa")`, but I am still getting the same warnings.
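For clarity, here is a sketch of what that full call would look like. The model, data frame, and variable names are from the post above; the raised evaluation cap via `optCtrl = list(maxfun = 2e5)` is an illustrative addition (a standard `glmerControl` option), not something from the original attempt:

```r
library(lme4)

# Same model as above, with an explicit optimizer override.
# maxfun raises the cap on function evaluations; 2e5 is an
# illustrative value, not taken from the original post.
model5b <- glmer(
  violentyn ~ vpul + bmi_new + wmax + (1 | fid),
  data    = cohort4,
  family  = binomial,
  control = glmerControl(optimizer = "bobyqa",
                         optCtrl   = list(maxfun = 2e5))
)
```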

Does anyone have any thoughts on 1) why this is happening? or 2) things I can try to resolve this error?
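Since the second warning explicitly asks "Rescale variables?", one concrete thing to try is centering and scaling the continuous predictors before fitting. A minimal sketch, assuming the `cohort4` data frame and column names from the call above (the `_z` suffixes are hypothetical names for the standardized columns):

```r
# Standardize the continuous predictors (mean 0, SD 1) so they are
# on comparable scales; scale() returns a one-column matrix, hence
# as.numeric() to get a plain vector back.
cohort4$vpul_z    <- as.numeric(scale(cohort4$vpul))
cohort4$bmi_new_z <- as.numeric(scale(cohort4$bmi_new))
cohort4$wmax_z    <- as.numeric(scale(cohort4$wmax))

model5_scaled <- glmer(
  violentyn ~ vpul_z + bmi_new_z + wmax_z + (1 | fid),
  data = cohort4, family = binomial
)
```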

When running allFit I am getting:

> summary(Newmodel)
$which.OK
                       bobyqa                   Nelder_Mead                    nlminbwrap               optimx.L-BFGS-B nloptwrap.NLOPT_LN_NELDERMEAD
                         TRUE                          TRUE                          TRUE                          TRUE                          TRUE
    nloptwrap.NLOPT_LN_BOBYQA
                         TRUE

$msgs
$msgs$bobyqa
$msgs$bobyqa[[1]]
[1] "Model failed to converge with max|grad| = 0.0492524 (tol = 0.002, component 1)"
$msgs$bobyqa[[2]]
[1] "Model is nearly unidentifiable: very large eigenvalue\n - Rescale variables?"

$msgs$Nelder_Mead
$msgs$Nelder_Mead[[1]]
[1] "Model failed to converge with max|grad| = 0.0208731 (tol = 0.002, component 1)"
$msgs$Nelder_Mead[[2]]
[1] "Model is nearly unidentifiable: very large eigenvalue\n - Rescale variables?"

$msgs$nlminbwrap
[1] "boundary (singular) fit: see help('isSingular')"

$msgs$`optimx.L-BFGS-B`
[1] "unable to evaluate scaled gradient" "Model failed to converge: degenerate Hessian with 1 negative eigenvalues"

$msgs$nloptwrap.NLOPT_LN_NELDERMEAD
[1] "unable to evaluate scaled gradient" "Model failed to converge: degenerate Hessian with 1 negative eigenvalues"

$msgs$nloptwrap.NLOPT_LN_BOBYQA
[1] "Model failed to converge with max|grad| = 17.0248 (tol = 0.002, component 1)"

$fixef
                              (Intercept)
bobyqa                         -12.325043
Nelder_Mead                    -12.326691
nlminbwrap                      -3.119328
optimx.L-BFGS-B                -12.328315
nloptwrap.NLOPT_LN_NELDERMEAD  -12.325046
nloptwrap.NLOPT_LN_BOBYQA      -11.525685

$llik
                       bobyqa                   Nelder_Mead                    nlminbwrap               optimx.L-BFGS-B nloptwrap.NLOPT_LN_NELDERMEAD
                    -7945.366                     -7945.358                    -15968.103                     -7945.366                     -7945.365
    nloptwrap.NLOPT_LN_BOBYQA
                    -7987.759

$sdcor
                              fid.(Intercept)
bobyqa                         28.77715048132
Nelder_Mead                    28.81300356231
nlminbwrap                      0.00004213222
optimx.L-BFGS-B                28.83265512110
nloptwrap.NLOPT_LN_NELDERMEAD  28.79536607386
nloptwrap.NLOPT_LN_BOBYQA      22.37746729938

$theta
                              fid.(Intercept)
bobyqa                         28.77715048132
Nelder_Mead                    28.81300356231
nlminbwrap                      0.00004213222
optimx.L-BFGS-B                28.83265512110
nloptwrap.NLOPT_LN_NELDERMEAD  28.79536607386
nloptwrap.NLOPT_LN_BOBYQA      22.37746729938

$times
                              user.self sys.self elapsed user.child sys.child
bobyqa                           169.55    12.90  182.62         NA        NA
Nelder_Mead                      240.92    18.10  259.18         NA        NA
nlminbwrap                         9.69     0.37   10.06         NA        NA
optimx.L-BFGS-B                  226.92    10.24  237.44         NA        NA
nloptwrap.NLOPT_LN_NELDERMEAD    136.09     5.62  141.89         NA        NA
nloptwrap.NLOPT_LN_BOBYQA         80.90     3.26   84.19         NA        NA

$feval
                       bobyqa                   Nelder_Mead                    nlminbwrap               optimx.L-BFGS-B nloptwrap.NLOPT_LN_NELDERMEAD
                          142                           191                            NA                            50                           103
    nloptwrap.NLOPT_LN_BOBYQA
                           96

attr(,"class")
[1] "summary.allFit"
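For reference, a `summary.allFit` object like the one above is produced by lme4's `allFit()`; given the `Newmodel` name in the output and the `model5` fit earlier, the call was presumably along these lines (a sketch, not the poster's exact code):

```r
library(lme4)

# allFit() refits the same model with every available optimizer,
# so the results can be compared across optimizers.
Newmodel <- allFit(model5)
summary(Newmodel)
```

Reading the output this way: four optimizers agree closely (log-likelihood near -7945, random-intercept SD near 28.8), while nlminbwrap collapses to a singular fit (SD ~ 0, log-likelihood -15968), which is the cross-optimizer consistency check described in the comments below.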
  • have you read the `?lme4::convergence` or `?lme4::troubleshooting` help pages yet ... ? – Ben Bolker Nov 08 '22 at 16:06
  • @BenBolker thank you! I have, and I was in the process of trying the "allFit" suggestion but that also hasn't been working. Some of the other suggestions are a little bit confusing to me (e.g., I'm not sure how to double check the Hessian calculation) – Bridget Nov 08 '22 at 16:17
  • Can you post some more information on the output of `allFit()`? In particular, note that the goal of allFit is **not** "let's see if we can find an optimizer that doesn't warn", but rather "let's see if the results from a bunch of different optimizers are sufficiently consistent that we feel comfortable making conclusions" – Ben Bolker Nov 08 '22 at 17:00
  • @BenBolker I will as soon as my collaborators have it! Another question I have is I'm confused as to why I'm having convergence problems in just the base model alone. Is this a common problem? – Bridget Nov 09 '22 at 15:27
  • it's not uncommon with very large data sets, for somewhat boring/painful historical/technical reasons. How big is a "very large" data set (how many total observations)? – Ben Bolker Nov 09 '22 at 15:51
  • about 200,000 :) There are fathers and sons in this sample that are clustered on a family ID variable – Bridget Nov 09 '22 at 15:57
  • @BenBolker I added the allFit output above - not sure what to make of it – Bridget Nov 09 '22 at 21:05
  • Greetings! Usually it is helpful to provide a minimally reproducible dataset for questions here so people can troubleshoot your problems. One way of doing this is by using the `dput` function. You can find out how to use it here: https://youtu.be/3EID3P1oisg – Shawn Hemelstrand Nov 15 '22 at 03:29
  • You could plot the results based on various optimizers. If the results are consistent across optimizers, they would be more trustworthy. A function and a code-through is available at: https://pablobernabeu.github.io/2021/a-new-function-to-plot-convergence-diagnostics-from-lme4-allfit/ – Pablo Bernabeu Jun 24 '23 at 11:16

0 Answers