
I am using scipy.optimize.minimize for an optimization problem.

This is my code:


import numpy as np
from scipy.optimize import minimize
from scipy.optimize import Bounds

#bounds = Bounds([25, 36], [26, 38],[10,27],[6,28],[0,1800],[0,800],[0,100],[25,60],[2,7])

bounds = Bounds([20,6,20,23],[35,9,50,26])

energy_history = []
x_values = []

def objective(x):
    return (-0.20859863*x[0:1] - 1.5088649*x[1:2] + 0.10707853*x[2:3] + 1.6829923*x[3:4]
            - 0.008870916*x[0:1]*x[1:2] + 0.0007393111*x[0:1]*x[2:3] + 0.010610705*x[0:1]*x[3:4]
            + 0.005123541*x[1:2]*x[2:3] + 0.086458616*x[1:2]*x[3:4] - 0.007695199*x[2:3]*x[3:4]
            + 0.00016993227*x[0:1]*x[0:1] - 0.026582083*x[1:2]*x[1:2] + 0.00014467833*x[2:3]*x[2:3]
            - 0.051599417*x[3:4]*x[3:4] - 9.540932)

def callback(x):
    fobj = objective(x)
    x_values.append(x)
    energy_history.append(fobj)


x0 = np.array([34,8,49,25])
res = minimize(objective, x0, method='trust-constr',
               options={'verbose': 1}, bounds=bounds,callback=callback)



optimal_values= res.x

print('optimal values found: ' + str(res.x))
print('energy consumed: ' + str(res.fun))

I get an error when I run this.

The error is in the callback function; it says:

TypeError: callback() takes 1 positional argument but 2 were given

Where am I going wrong?

chink
1 Answer


According to the docs, the callback function's signature depends on the solver chosen. (Which is not very nice.)

While it's callback(xk) for the other methods, in your case it's callback(xk, state) because you are using method='trust-constr'.

Just add this additional parameter to your callback (and ignore it if you have no use for the state information).
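For example, a minimal sketch of the fix, reusing your objective and bounds (scalar indexing like x[0] is used here instead of the x[0:1] slices, purely for readability):

```python
import numpy as np
from scipy.optimize import minimize, Bounds

bounds = Bounds([20, 6, 20, 23], [35, 9, 50, 26])

energy_history = []
x_values = []

def objective(x):
    return (-0.20859863*x[0] - 1.5088649*x[1] + 0.10707853*x[2] + 1.6829923*x[3]
            - 0.008870916*x[0]*x[1] + 0.0007393111*x[0]*x[2] + 0.010610705*x[0]*x[3]
            + 0.005123541*x[1]*x[2] + 0.086458616*x[1]*x[3] - 0.007695199*x[2]*x[3]
            + 0.00016993227*x[0]**2 - 0.026582083*x[1]**2 + 0.00014467833*x[2]**2
            - 0.051599417*x[3]**2 - 9.540932)

# trust-constr calls callback(xk, state); accept the second
# argument and simply ignore it if you don't need it
def callback(x, state):
    x_values.append(np.copy(x))
    energy_history.append(objective(x))

x0 = np.array([34, 8, 49, 25])
res = minimize(objective, x0, method='trust-constr',
               bounds=bounds, callback=callback)

print('optimal values found: ' + str(res.x))
print('energy consumed: ' + str(res.fun))
```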

sascha
  • Thanks!! Also, I'm getting different optimum values for different initial values, which is `x0` in my case. Any idea why this is happening? Tried going through the docs but found nothing. – chink May 11 '19 at 11:01
  • 1
    Because all solvers there only provide local convergence. If your optimization problem is non-convex (you are multiplying variables), this is normal. There is a concept of *global solvers*, but not within scipy, and local convergence vs. global convergence on non-convex problems is, simplified, a P vs. NP thing. – sascha May 11 '19 at 11:02
  • Ahh!! My optimization problem is a non-convex problem. Can you suggest which algorithms/packages I should use to get good results for non-convex problems? – chink May 11 '19 at 11:09
  • or any link which I can refer? – chink May 11 '19 at 11:25
  • Those are harder to use (less evolved python-lib if at all; a lot of stuff which can go wrong: e.g. gradient calculation). The keyword global optimization will lead to papers including surveys. A popular open-source solver is [couenne](https://projects.coin-or.org/Couenne). [Pyomo](http://www.pyomo.org/) is probably the most pain-free lib to give access to it in python. – sascha May 11 '19 at 11:27
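As a cheap middle ground before reaching for a dedicated global solver, a multi-start loop over the local solver is easy to try: draw several random starting points inside the bounds and keep the best local result. This gives no guarantee of finding the global optimum of a non-convex problem, but it shows how sensitive the result is to x0. A sketch, reusing the objective from the question:

```python
import numpy as np
from scipy.optimize import minimize, Bounds

lb = np.array([20, 6, 20, 23])
ub = np.array([35, 9, 50, 26])
bounds = Bounds(lb, ub)

def objective(x):
    return (-0.20859863*x[0] - 1.5088649*x[1] + 0.10707853*x[2] + 1.6829923*x[3]
            - 0.008870916*x[0]*x[1] + 0.0007393111*x[0]*x[2] + 0.010610705*x[0]*x[3]
            + 0.005123541*x[1]*x[2] + 0.086458616*x[1]*x[3] - 0.007695199*x[2]*x[3]
            + 0.00016993227*x[0]**2 - 0.026582083*x[1]**2 + 0.00014467833*x[2]**2
            - 0.051599417*x[3]**2 - 9.540932)

# run the local solver from several random starting points
# inside the bounds and keep the best local minimum found
rng = np.random.default_rng(0)
best = None
for _ in range(20):
    x0 = rng.uniform(lb, ub)
    res = minimize(objective, x0, method='trust-constr', bounds=bounds)
    if best is None or res.fun < best.fun:
        best = res

print('best of 20 starts:', best.x, best.fun)
```

If the 20 runs all land on the same minimum, the problem is probably well behaved over the box; if they scatter, that confirms the local-convergence issue described above.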