I have set up a loss function, but the parameters I want to optimize are not very sensitive to it, so the value returned by the loss function often does not change between evaluations, which causes very slow convergence. I therefore want to adjust the step sizes used for the finite-difference approximation to the gradient, just like 'ndeps' in optim(). In addition, I have seen claims that 'L-BFGS-B' in optim() does not implement the improvements Nocedal and Morales published in 2011, so I want to try lbfgsb3c().
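To make the 'ndeps' part concrete, here is a minimal sketch with a made-up loss whose second parameter is almost insensitive to the loss (the loss function and step sizes are purely illustrative):

```r
## In optim(), control$ndeps is the vector of per-parameter step sizes used
## for the finite-difference gradient approximation (default 1e-3 for each).
loss <- function(p) {
  ## the second parameter barely affects the loss, so the default step of
  ## 1e-3 produces almost no change in fn between evaluations
  (p[1] - 1)^2 + 1e-6 * (p[2] - 3)^2
}

fit <- optim(
  par     = c(0, 0),
  fn      = loss,
  method  = "L-BFGS-B",
  control = list(ndeps = c(1e-3, 1e-1))  ## larger step for the insensitive parameter
)
fit$par
```

What I am looking for is whether lbfgsb3c() has an equivalent control, or whether I would instead have to supply my own finite-difference gradient (e.g. built with numDeriv::grad()) as the gradient argument.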
Can you please give us a simple reproducible example? – Ben Bolker Sep 26 '22 at 20:52