
I would like to get an optimal solution for the following equation set:

x_w * 1010 + x_m * d_m = 1017.7

x_w + x_m = 1

my code is as follows:

from scipy.optimize import minimize
import numpy as np

def f1(p):
    x_w, x_m, d_m = p
    return (x_w*1010 + x_m*d_m) - 1017.7

def f2(p):
    x_w, x_m, d_m = p
    return x_w + x_m - 1

bounds = [(0, 1), (0, 1), (1000, 10000)]

x0 = np.array([0.5, 0.5, 1500])

res = minimize(lambda p: f1(p)+f2(p), x0=x0, bounds=bounds)

However, all I get back (res.x) are the initial values (x0).

How do I make it work? Is there a better approach? There are just these two equations for the three variables.

jozi
1 Answer


In general, you can't solve the equation system by minimizing f1(p) + f2(p), since the minimum of this objective is not a solution of the equation system: positive and negative residuals can cancel each other out. Instead, minimize the sum of squared errors of the equations, i.e. minimize f1(p)**2 + f2(p)**2:

minimize(lambda p: f1(p)**2 + f2(p)**2, x0=x0, bounds=bounds)

Alternatively, you could use scipy.optimize.fsolve, which unfortunately doesn't support bounds.
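If you do need bounds together with a root-finding formulation, scipy.optimize.least_squares takes a function returning the vector of residuals plus box bounds; a sketch using the same numbers as above:

```python
from scipy.optimize import least_squares

# Vector of residuals, one entry per equation
def residuals(p):
    x_w, x_m, d_m = p
    return [x_w * 1010 + x_m * d_m - 1017.7,  # first equation
            x_w + x_m - 1]                     # second equation

# Bounds are given as (lower_bounds, upper_bounds)
sol = least_squares(residuals, x0=[0.5, 0.5, 1500],
                    bounds=([0, 0, 1000], [1, 1, 10000]))
x_w, x_m, d_m = sol.x
```

This minimizes the sum of squared residuals internally, so it solves the same problem as the minimize call above while respecting the bounds.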

joni