I have the following function:

def V(x):
    # a and g are parameters; both are set to 0 here
    a = 0
    g = 0
    return (1/6*(3*x[0]**2 + 3*x[1]**2 + 6*x[0]**2*x[1] - 2*x[1]**3)
            + a*(2*x[0]**6*(x[1]+1) + x[0]**4*x[1]**2*(2*x[1]+1)
                 + x[0]**2*x[1]**4*(2*x[1]+1) + 2*x[1]**6*(x[1]+1))
            + g*(x[0]**4*(x[1]-1) + x[0]**2*(x[1]-2)*x[1]**2 - x[1]**4*(x[1]+1)))

and I want to find where the partial derivatives of V with respect to x and y are both 0 (x[0] is x and x[1] is y). I decided to use the scipy minimize function for this, so I came up with this function:

import numpy as np
from scipy.optimize import minimize

def LBFGS(x, y):
    # Run L-BFGS-B from the initial guess (x, y) and return the minimizer
    x0 = np.array([x, y])
    res = minimize(V, x0, method='L-BFGS-B').x.tolist()
    return res[0], res[1]

This takes in initial values and outputs the x and y coordinates where V is minimized. The problem came when I started testing this function. One can easily check that (0, 0) is a point where the partial derivatives of V with respect to x and y are both 0. When I plugged in LBFGS(0, 0), it worked. However, when I plugged in LBFGS(0, 0.1), I got (-4.9999383996738455e-09, 8.936905459196604e-08). Why am I getting weird answers when I should be getting (0, 0)?
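To quantify how far off this result really is, one check is to estimate the gradient at the returned point and compare it against the solver's stopping tolerance (a minimal sketch; the finite-difference step 1e-8 is an arbitrary choice):

import numpy as np
from scipy.optimize import minimize, approx_fprime

res = minimize(V, np.array([0.0, 0.1]), method='L-BFGS-B')

# L-BFGS-B stops once the projected gradient falls below its gtol
# threshold (1e-5 by default), so a point within ~1e-7 of (0, 0)
# already counts as converged by the solver's own criterion.
grad = approx_fprime(res.x, V, 1e-8)
print(res.x, grad, res.message)

If the estimated gradient is already below gtol, the returned point is as close to (0, 0) as the solver was asked to get; passing options={'gtol': 1e-12} would push it closer, at the cost of more iterations.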

Another example is when I plug in LBFGS(0, 1.01), which should output (0, 1), but instead I get

RuntimeWarning: overflow encountered in double_scalars
  return 1/6*(3*x[0]**2+3*x[1]**2+6*x[0]**2*x[1]-2*x[1]**3)+a*(2*x[0]**6*(x[1]+1)+x[0]**4*x[1]**2*(2*x[1]+1)+x[0]**2*x[1]**4*(2*x[1]+1)+2*x[1]**6*(x[1]+1))+g*(x[0]**4*(x[1]-1)+x[0]**2*(x[1]-2)*x[1]**2-x[1]**4*(x[1]+1))
RuntimeWarning: invalid value encountered in double_scalars
  return 1/6*(3*x[0]**2+3*x[1]**2+6*x[0]**2*x[1]-2*x[1]**3)+a*(2*x[0]**6*(x[1]+1)+x[0]**4*x[1]**2*(2*x[1]+1)+x[0]**2*x[1]**4*(2*x[1]+1)+2*x[1]**6*(x[1]+1))+g*(x[0]**4*(x[1]-1)+x[0]**2*(x[1]-2)*x[1]**2-x[1]**4*(x[1]+1))
(-138.778, 1.3510798882111488e+26)
  • How far is it really from 0? It's a (somewhat accurate) approximation of (0, 0). Iterative solvers (as opposed to closed-form solutions) usually only *approximate* solutions, and yours even used numerical differentiation to get there, which does not help accuracy. The solver does have termination criteria to decide when to stop; the solver's status and logs might give away which termination criterion fired, i.e. what makes the solver think it found the correct solution. – sascha Sep 21 '21 at 23:54
  • Relative to 0.1, -5e-9 *IS* zero. – Tim Roberts Sep 22 '21 at 03:37
  • Wait, also, could someone explain what is happening in the second example? – Dan Sep 22 '21 at 04:08
  • The second example is numerical trouble/instability, which is not uncommon when fast-growing functions are combined with untuned numerical differentiation. I would turn on logging and inspect the values in a callback. It might also help to provide variable bounds, which this solver supports; see the sketch below. – sascha Sep 22 '21 at 11:00
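Following up on that comment, a minimal sketch of both the callback logging and the variable bounds (the bound values of ±2 are arbitrary assumptions, chosen only to keep the iterates in a region where V stays finite):

import numpy as np
from scipy.optimize import minimize

def log_iterate(xk):
    # Called by the solver after each iteration with the current iterate
    print(f"x = {xk}, V(x) = {V(xk)}")

res = minimize(V, np.array([0.0, 1.01]), method='L-BFGS-B',
               bounds=[(-2.0, 2.0), (-2.0, 2.0)],  # box constraints supported by L-BFGS-B
               callback=log_iterate)
print(res.x, res.message)

With bounds in place the iterates stay inside the box, so the overflow disappears; note, however, that if the descent direction leads out of the box, the solver may stop on the boundary rather than at a stationary point.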
