
I am using `optim` to solve an optimization problem. It is a standard minimization of an L2 norm, so the objective is non-negative and well behaved. However, when using `optim`, I run into some issues. First I call `optim` and print the returned object:

> lambda2
$par
[1]  4.6105840762  0.1268444008 -0.8488319926 -0.1734999439 -0.8090172550 -0.0006518246

$value
[1] 0.004230469

$counts
function gradient 
    1317       NA 

$convergence
[1] 0

$message
NULL

This seems fine, except that `lambda2$value` is still large. If I rerun `optim` starting from `lambda2$par`, I get a better result. As the output shows, the function-evaluation count (1317) is nowhere near `maxit`, and the objective value is still above the tolerance I asked for, yet `$convergence` is 0, i.e. `optim` reports success. My tolerance is deliberately very low, essentially unreachable, precisely because I would like `optim` to run more iterations.

Here is the outcome when I call `optim` again:

> lambda2=optim(lambda2$par,fn,control = list(maxit=100000,abstol=10e-08))
> lambda2
$par
[1]  3.8098969475  0.1892906218 -1.4387655921 -0.3345618667 -1.2719792359 -0.0000129872

$value
[1] 0.001079045

$counts
function gradient 
    1529       NA 

$convergence
[1] 0

$message
NULL
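
For now I can brute-force this by automating the restarts. A minimal sketch, assuming the same `fn` and the `lambda2` object from above (the 1e-12 improvement threshold is an arbitrary choice of mine, not anything built into `optim`):

    # Restart optim from the previous solution until the objective value
    # stops improving; the control settings match the calls above.
    fit <- lambda2
    repeat {
      refit <- optim(fit$par, fn, control = list(maxit = 100000, abstol = 10e-08))
      if (refit$value >= fit$value - 1e-12) break  # no meaningful improvement left
      fit <- refit
    }
    fit$value  # best objective value found across all restarts

This works, but it feels like a workaround rather than a proper fix.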

I found the following question, but the suggested solution only works for one of `optim`'s methods, so I was not able to apply it to my problem: "How can I force r optim to run more iterations?"

  • Maybe adjust `reltol`? Setting `trace = 1` should already give some useful information. You might want to create profile plots for the fit. – Roland Mar 06 '23 at 13:43
  • How do we know " the tolerance is still above the desired one" ? The output from `optim()` only shows the achieved value of the objective function, not the achieved (relative or absolute) tolerance. Did you set `abstol` in the `control=` argument? Have you considered using `method = "BFGS"` instead of the default Nelder-Mead? Can you supply a gradient function? – Ben Bolker Mar 06 '23 at 14:47
  • @BenBolker I did set `abstol=10e-8`. I did consider using `"BFGS"`, but it takes longer and the results are not better. Finally, it is possible to find a gradient function, however I did not supply it. I solved the same problem years ago with MATLAB's `fmincon` and it worked without a gradient. – Osvaldo Assunção Mar 06 '23 at 15:01
  • In my limited experience, Matlab's optimization functions are uncannily good (my working theory is that since Matlab is commercial/has to satisfy customers, they have put enormous efforts into tuning and improving their optimization algorithms). If you can give a *minimal* example we might be able to give some more guidance. What happens if you set `abstol` even lower? Maybe try the Nelder-Mead or BOBYQA algorithms from the `nloptr` package? – Ben Bolker Mar 06 '23 at 15:18
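
To make these suggestions concrete, here is a sketch assuming the same `fn` and start values as above (note, too, that `10e-08` is 1e-7, not 1e-8). Per R's documentation, unless the value actually reaches `abstol`, the default Nelder-Mead method stops on `reltol` (default `sqrt(.Machine$double.eps)`, roughly 1e-8), so tightening `reltol` is what forces more iterations; the `nloptr` lines use that package's `neldermead` and `bobyqa` wrappers with NLopt's standard termination options:

    # Tighten reltol so Nelder-Mead keeps iterating; trace = 1 prints progress.
    lambda2 <- optim(lambda2$par, fn,
                     control = list(maxit = 100000, reltol = 1e-14, trace = 1))

    # Alternatives suggested in the comments, via the nloptr package.
    library(nloptr)
    fit_nm <- neldermead(lambda2$par, fn,
                         control = list(ftol_abs = 1e-12, maxeval = 100000))
    fit_bq <- bobyqa(lambda2$par, fn,
                     control = list(ftol_abs = 1e-12, maxeval = 100000))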

0 Answers