
I am trying a simple experiment to learn SciPy's SLSQP optimizer.

I took the objective function:

import numpy as np
from scipy.optimize import minimize

def obj(x):
    return -1*((x[0]*x[0])+(x[1]*x[1]))

Its Jacobian:

def jacj(x):
    return [-2*x[0],-2*x[1]]

Its bounds:

bounds=[(0,1),(0,1)]

A simple constraint, x[0] + 2*x[1] <= 1:

cons2 = ({'type': 'ineq',
          'fun': lambda x: np.array([-(x[0]) - 2*(x[1]) + 1]),
          'jac': lambda x: np.array([-1.0, -2.0])})

Now I try with the initial guess x0 = [0.1, 0.01]:

res = minimize(obj, x0, method='SLSQP', jac=jacj, bounds=bounds,
               constraints=cons2, options={'maxiter': 100, 'ftol': 1e-6, 'eps': 1e-8})

When I run this I get the solution x[0] = 1, x[1] = 0 with obj = -1.

But when I start with the initial guess x0 = [0.001, 0.01], I get the solution x[0] = 0, x[1] = 0.5 with obj = -0.25.

Why is it not giving an optimal solution in the latter run? How does it work?

ayush singhal

1 Answer


Maximizing a sum-of-squares function (which is what minimizing its negative, as done here, amounts to) is a non-convex problem. A local solver such as SLSQP will typically converge to a local optimum, and which one it finds depends on the starting point. For a guaranteed globally optimal solution you need a global solver.
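One common workaround, sketched below, is a multi-start heuristic: run SLSQP from several random starting points and keep the best local solution. This is only a heuristic, not a guaranteed global method; the snippet reuses the obj, jacj, bounds and cons2 from the question.

import numpy as np
from scipy.optimize import minimize

def obj(x):
    return -1*((x[0]*x[0])+(x[1]*x[1]))

def jacj(x):
    return [-2*x[0], -2*x[1]]

bounds = [(0, 1), (0, 1)]
cons2 = {'type': 'ineq',
         'fun': lambda x: np.array([-x[0] - 2*x[1] + 1]),
         'jac': lambda x: np.array([-1.0, -2.0])}

rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(0, 1, size=2)            # random start inside the box
    res = minimize(obj, x0, method='SLSQP', jac=jacj,
                   bounds=bounds, constraints=cons2)
    if res.success and (best is None or res.fun < best.fun):
        best = res                            # keep the best local solution found so far

print(best.x, best.fun)                       # typically recovers x = [1, 0], obj = -1 here

scipy.optimize also ships global methods such as differential_evolution and basinhopping that can be pointed at the same kind of problem.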

Note that minimizing a sum-of-squares objective is easier: that problem is convex, so the solver will (barring infeasibility or numerical issues) always converge to the global optimum.
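As a rough illustration (obj_convex below is a hypothetical name for the question's objective without the sign flip, with the same bounds and constraint), both starting points from the question then end at the same solution:

import numpy as np
from scipy.optimize import minimize

def obj_convex(x):
    return x[0]*x[0] + x[1]*x[1]              # convex: no sign flip

bounds = [(0, 1), (0, 1)]
cons2 = {'type': 'ineq',
         'fun': lambda x: np.array([-x[0] - 2*x[1] + 1])}

for x0 in ([0.1, 0.01], [0.001, 0.01]):
    res = minimize(obj_convex, x0, method='SLSQP',
                   bounds=bounds, constraints=cons2)
    print(x0, res.x, res.fun)                 # both runs converge to (0, 0) with objective 0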

Erwin Kalvelagen