
I have set up a loss function, but the parameters I want to optimize are not very sensitive to it, so the value returned by the loss function often does not change between evaluations, which causes very slow convergence. I would therefore like to adjust the step sizes used for the finite-difference approximation of the gradient, just like 'ndeps' in optim(). In addition, I have seen claims that 'L-BFGS-B' in optim() does not implement the improvements Morales and Nocedal published in 2011, so I want to try lbfgsb3c(). How can I control the finite-difference step sizes when using lbfgsb3c()?
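To make the question concrete, here is a rough sketch of what I have in mind. The loss function is just a toy stand-in for my real one, and I am assuming lbfgsb3c() accepts a user-supplied gradient via its gr argument, the same way optim() does; num_grad and h are my own names, not package API:

```r
library(lbfgsb3c)

## Toy stand-in for my real loss function
loss <- function(p) sum((p - c(1, 2))^2)

## With optim() I can widen the finite-difference steps via control$ndeps
fit_optim <- optim(
  par = c(0, 0), fn = loss, method = "L-BFGS-B",
  control = list(ndeps = c(1e-2, 1e-2))
)

## With lbfgsb3c() I have not found an equivalent control, so one workaround
## I am considering is passing my own central-difference gradient with a
## step size h that I choose
num_grad <- function(p, h = 1e-2) {
  vapply(seq_along(p), function(i) {
    e <- replace(numeric(length(p)), i, h)
    (loss(p + e) - loss(p - e)) / (2 * h)
  }, numeric(1))
}

fit_lbfgsb3c <- lbfgsb3c(par = c(0, 0), fn = loss, gr = num_grad)
```

Is there a control option in lbfgsb3c() that does this directly, or is a hand-rolled gradient like the above the intended approach?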

yiboliu