
I have a linear optimization problem whose cost function can be expressed in code like this:

value_to_minimize = 0.0
for i in range(len(v_1)):
    value_to_minimize += np.abs(v_1[i] - (v_2[i] * c1 + v_3[i] * c2 + v_4[i] * c3))

The solver should find values for the variables c1, c2, c3 that minimize this value. As constraints, c1, c2, c3 must sum to 1.0 and must not be negative. v_1, v_2, v_3 and v_4 are vectors of 10000 float values.

Here is an outline for solving this minimization problem in cvxpy, but without the expression to pass to cp.Minimize(...):

V1 = np.array(v_1).reshape(10000, 1)
V2 = np.column_stack([v_2, v_3, v_4])  # shape (10000, 3)
c = cp.Variable((3, 1), nonneg=True)

prob = cp.Problem(cp.Minimize(...),  # ???
                  [cp.sum(c) == 1])
prob.solve(verbose=True)

What would the expression passed to cp.Minimize(...) look like in that case?

MerklT
  • (1) what values are wrong and in what way? (2) make sure things can be reproduced. – Erwin Kalvelagen May 21 '20 at 11:19
  • @ErwinKalvelagen Thanks for your reply. The three values in the `c` Variable do not contain the optimal values, because I can find better values by trial and error. Reproducibility is not easy to achieve in this case; an assessment of whether `cp.pnorm(u - cp.sum(V@c), 1))` really does what I want would help me. – MerklT May 22 '20 at 09:39
  • Without a small piece of code that reproduces the problem (https://stackoverflow.com/help/minimal-reproducible-example), I am not sure how I can help. – Erwin Kalvelagen May 22 '20 at 10:20
  • @ErwinKalvelagen Thank you very much. I have now edited the question to make it more open. – MerklT May 23 '20 at 15:33
  • BTW. Why do you want to use this particular library? This seems like a relatively simple linear optimization problem. – rpoleski May 23 '20 at 16:45

2 Answers


If you don't mind using another library, I would recommend scipy for this one:

from scipy.optimize import minimize
import numpy as np

def OF(x0, v_1, v_2, v_3, v_4):
  value_to_minimize = 0.0
  for i in range(0, len(v_1)):
    value_to_minimize += np.abs(v_1[i] - (v_2[i] * x0[0] + v_3[i] * x0[1] + v_4[i] * x0[2]))
  return value_to_minimize


if __name__ == '__main__':

  x0 = np.array([0, 0, 0])
  v_1 = np.random.randint(10, size = 10000)
  v_2 = np.random.randint(10, size = 10000)
  v_3 = np.random.randint(10, size = 10000)
  v_4 = np.random.randint(10, size = 10000)


  minx0 = np.zeros(len(x0))           # lower bounds: c values >= 0
  maxx0 = np.full(len(x0), np.inf)    # no upper bounds
  bounds = tuple(zip(minx0, maxx0))

  cons = {'type': 'eq',
          'fun': lambda x0: 1 - sum(x0)}
  res_cons = minimize(OF, x0, (v_1, v_2, v_3, v_4), bounds = bounds, constraints=cons, method='SLSQP')



  print(res_cons)
  print('Current value of objective function: ' + str(res_cons['fun']))
  print('Current value of controls:')
  print(res_cons['x'])

Output is:

     fun: 27919.666908810435
     jac: array([5092.        , 5672.        , 5108.39868164])
 message: 'Optimization terminated successfully.'
    nfev: 126
     nit: 21
    njev: 21
  status: 0
 success: True
       x: array([0.33333287, 0.33333368, 0.33333345])
Current value of objective function: 27919.666908810435
Current value of controls:
[0.33333287 0.33333368 0.33333345]

Obviously the actual values here do not mean much, since I just used random integers for the v_ vectors; this is just a demo showing that the model meets your constraints of the c values summing to 1 and not being negative.

Edit/update: I did not look at the objective function/constraints closely enough to realize this is a linear problem... you should be using a linear solver algorithm (you can use a nonlinear one, but it's overkill).
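
To make concrete what the linear-programming route involves, here is a minimal sketch with scipy.optimize.linprog (V and the auxiliary variables t are names I introduce for the reformulation): each absolute value |v_1[i] - (V @ c)[i]| is bounded by an auxiliary variable t[i], which is essentially the transformation cvxpy performs behind the scenes:

import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix, identity, hstack, vstack

# same kind of random demo data as above
v_1, v_2, v_3, v_4 = (np.random.randint(10, size=10000).astype(float) for _ in range(4))

n = len(v_1)
V = np.column_stack([v_2, v_3, v_4])            # shape (n, 3)

# decision vector: [c1, c2, c3, t_1, ..., t_n]; minimize sum(t)
obj = np.concatenate([np.zeros(3), np.ones(n)])

# |v_1 - V @ c| <= t, split into two sets of linear inequalities
I = identity(n, format='csr')
A_ub = vstack([hstack([csr_matrix(-V), -I]),    # -V @ c - t <= -v_1
               hstack([csr_matrix( V), -I])])   #  V @ c - t <=  v_1
b_ub = np.concatenate([-v_1, v_1])

# c1 + c2 + c3 == 1
A_eq = csr_matrix(np.concatenate([np.ones(3), np.zeros(n)]))
b_eq = [1.0]

res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (3 + n), method='highs')
print(res.x[:3])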

scipy's linear solver (linprog) would need the absolute values to be reformulated with auxiliary variables by hand, so it is easier to go back to cvxpy, which handles that automatically:

import numpy as np
import cvxpy as cp

# Create three scalar optimization variables.
x = cp.Variable()
y = cp.Variable()
z = cp.Variable()

v_1 = np.random.randint(10, size = 10000)
v_2 = np.random.randint(10, size = 10000)
v_3 = np.random.randint(10, size = 10000)
v_4 = np.random.randint(10, size = 10000)

constraints = [x + y + z == 1, x >= 0, y >= 0, z >= 0]

objective = cp.Minimize(cp.sum(cp.abs(v_1 - (v_2 * x + v_3 * y + v_4 * z))))

prob = cp.Problem(objective, constraints)
print("Value of OF:", prob.solve())
print('Current value of controls:')
print(x.value, y.value, z.value)

output:

Value of OF: 27621.999978414093
Current value of controls:
0.3333333333016109 0.33333333406414983 0.3333333326298208
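
If you want to keep the matrix form from the question's outline (one vector variable instead of three scalars), a sketch along these lines should be equivalent; V is just the three v_ vectors stacked as columns:

import numpy as np
import cvxpy as cp

# same kind of random demo data as above
v_1, v_2, v_3, v_4 = (np.random.randint(10, size=10000) for _ in range(4))

V = np.column_stack([v_2, v_3, v_4])   # shape (10000, 3), columns are v_2, v_3, v_4
c = cp.Variable(3, nonneg=True)        # c = [c1, c2, c3]

prob = cp.Problem(cp.Minimize(cp.sum(cp.abs(v_1 - V @ c))),
                  [cp.sum(c) == 1])
prob.solve()
print(prob.value, c.value)

cp.norm(v_1 - V @ c, 1) is an equivalent way of writing the same objective.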
Anna Nevison
  • Many thanks for the solution, which works perfectly. However, I have seen that SLSQP is a non-linear solver, whereas my problem is linear. Could I be criticized for this in a scientific paper? – MerklT May 25 '20 at 11:13
  • @MerklT Ah, I didn't read your function closely; yes, you should be using a linear solver for a linear problem. You wouldn't be criticized for SLSQP, but you might get some looks since it's kind of overkill. I posted an update! – Anna Nevison May 26 '20 at 03:41

I strongly recommend removing one of the parameters and one constraint. If you know that c1 + c2 + c3 = 1., then use c3 = 1. - c1 - c2! This makes the minimizer's task much easier. Also, if v_1 etc. are numpy arrays, then use them as arrays, e.g.,

c3 = 1. - c1 - c2
value_to_minimize = np.sum(np.abs(v_1 - (v_2 * c1 + v_3 * c2 + v_4 * c3)))
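
As a rough sketch of how this reduced two-variable version could then be solved (reusing the scipy.optimize.minimize / SLSQP setup from the other answer and the same kind of random demo data), the constraints become c1 >= 0, c2 >= 0 and c1 + c2 <= 1:

import numpy as np
from scipy.optimize import minimize

def cost(c, v_1, v_2, v_3, v_4):
    c1, c2 = c
    c3 = 1.0 - c1 - c2               # eliminated variable
    return np.sum(np.abs(v_1 - (v_2 * c1 + v_3 * c2 + v_4 * c3)))

v_1, v_2, v_3, v_4 = (np.random.randint(10, size=10000) for _ in range(4))

res = minimize(cost, x0=np.array([1/3, 1/3]),
               args=(v_1, v_2, v_3, v_4),
               bounds=[(0, 1), (0, 1)],
               constraints={'type': 'ineq', 'fun': lambda c: 1.0 - c[0] - c[1]},
               method='SLSQP')
c1, c2 = res.x
print(c1, c2, 1.0 - c1 - c2)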
rpoleski
  • The simplification of c3 seems logical, thank you for that. How would you solve the minimization problem? With another library? – MerklT May 24 '20 at 08:18