I am using `scipy.optimize.minimize` to minimize a function subject to two constraints. I have been using the `trust-constr` method, which takes the value, gradient, and Hessian of the objective function.
However, in my case the Hessian may sometimes develop negative eigenvalues (i.e. it is no longer positive definite). The algorithm still needs to go downhill rather than converge to a saddle point (which can happen with Newton or quasi-Newton methods). Does the `trust-constr` method guarantee that?
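
For concreteness, here is a minimal sketch of the kind of call I am making. The objective, gradient, Hessian, and the two constraints below are placeholders rather than my actual problem, but the objective is chosen so that its Hessian is indefinite near the starting point (it has a saddle at the origin):

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Illustrative objective: f(x, y) = x**4 - 2*x**2 + y**2.
# Its Hessian is indefinite near the origin (saddle point there),
# with minima at (+/-1, 0).
def f(x):
    return x[0]**4 - 2.0 * x[0]**2 + x[1]**2

def grad(x):
    return np.array([4.0 * x[0]**3 - 4.0 * x[0], 2.0 * x[1]])

def hess(x):
    # Top-left entry 12*x**2 - 4 is negative for |x| < 1/sqrt(3),
    # so the Hessian is not positive definite there.
    return np.array([[12.0 * x[0]**2 - 4.0, 0.0],
                     [0.0,                   2.0]])

# Two placeholder constraints (my real constraints differ).
constraints = [
    NonlinearConstraint(lambda x: x[0] + x[1], -np.inf, 2.0),
    NonlinearConstraint(lambda x: x[0] - x[1], -1.0, np.inf),
]

x0 = np.array([0.1, 0.5])  # start near the saddle, where the Hessian is indefinite
res = minimize(f, x0, method='trust-constr',
               jac=grad, hess=hess,
               constraints=constraints)
print(res.x, res.fun)
```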