
I am trying to optimize a cost function with scipy.optimize.linprog, where the cost coefficients are a function of the variables; e.g.

Cost = c1 * x1 + c2 * x2 # (x1,x2 are the variables)

for example

if x1 = 1, c1 = 0.5

if x1 = 2, c1 = 1.25

etc.

Just to clarify:

We are looking for the minimum cost over the variables xi, i = 1, 2, 3, ...; the xi are non-negative integers.

However, the cost coefficient for each xi is a function of the value of xi. The cost is x1*f1(x1) + x2*f2(x2) + ... + c0.

fi is a "rate" table; e.g. f1(0) = 0; f1(1) = 2.00; f1(2) = 3.00, etc.

The xi are under constraints: they can't be negative and can't exceed qi =>

0 <= xi <= qi

The fi() values are calculated for each possible value of xi.

I hope this clarifies the model.
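
To make the model concrete, here is a tiny brute-force sketch (only an illustration of the quantity being minimized, not a practical approach; the rate tables, bounds and offset below are made-up example values):

    import itertools

    # hypothetical rate tables: f_i(v) for each integer value v of x_i
    rate_tables = [
        {0: 0.0, 1: 2.00, 2: 3.00},  # f1
        {0: 0.0, 1: 0.50, 2: 1.25},  # f2
    ]
    q = [2, 2]   # upper bounds q_i, so 0 <= x_i <= q_i
    c0 = 1.0     # constant offset

    def cost(x):
        # cost = x1*f1(x1) + x2*f2(x2) + ... + c0
        return sum(xi * rate_tables[i][xi] for i, xi in enumerate(x)) + c0

    # enumerate all feasible integer assignments (only viable for tiny instances)
    best = min(itertools.product(*[range(qi + 1) for qi in q]), key=cost)
    print(best, cost(best))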

  • What's the question? – grael Aug 22 '16 at 12:25
  • That's not a linear program. You need quadratic programming or a nonlinear solver. – percusse Aug 22 '16 at 13:01
  • Thank you. Any suggestion/link for sample code? – user2042448 Aug 22 '16 at 16:52
  • This all depends on the exact model (you are only giving a partial example). The possible approaches I see are mixed-integer programming (which can get messy; the easiest approach would be a piecewise approximation) or nonlinear programming. Both are hard in general, and there is no solver for either within scipy. Going the MIP way, you would need some clever formulation and could then use pulp-or (which includes cbc as a MIP solver). Going the NL way, you should use something which allows the usage of ipopt (pyomo, pyipopt). – sascha Aug 22 '16 at 18:59
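
To illustrate the MIP route from the last comment: since each xi only takes finitely many integer values, introducing one binary per (variable, value) pair turns the nonlinear term xi*fi(xi) into a constant coefficient on that binary. A minimal pulp sketch (the rate tables and the extra constraint are made-up example values; this assumes pulp with its bundled CBC solver is installed):

    import pulp

    rate_tables = [[0, 2.0, 3.0], [0, 0.5, 1.25]]  # hypothetical f_i(v), v = 0..q_i
    prob = pulp.LpProblem("rate_table_cost", pulp.LpMinimize)

    # y[i][v] == 1  <=>  x_i takes the value v
    y = [[pulp.LpVariable("y_%d_%d" % (i, v), cat="Binary")
          for v in range(len(tab))] for i, tab in enumerate(rate_tables)]
    for yi in y:                       # each x_i takes exactly one value
        prob += pulp.lpSum(yi) == 1

    # x_i expressed through the binaries; made-up side constraint x_0 + x_1 >= 2
    x = [pulp.lpSum(v * yi[v] for v in range(len(yi))) for yi in y]
    prob += x[0] + x[1] >= 2

    # objective: sum_i x_i * f_i(x_i), each term is a constant times a binary
    prob += pulp.lpSum(v * rate_tables[i][v] * y[i][v]
                       for i in range(len(y)) for v in range(len(y[i])))

    prob.solve()
    print([pulp.value(xi) for xi in x], pulp.value(prob.objective))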

1 Answer


Here is some prototype code to show you how, and to show that your problem is quite hard (regarding formulation and performance; the former is visible in the code).

The implementation uses cvxpy for modelling (convex-programming only) and is based on the mixed-integer approach.

Code

    import numpy as np
    from cvxpy import *

    """
    x0 == 0 -> f(x) = 0
    x0 == 1 -> f(x) = 1
    ...
    x1 == 0 -> f(x) = 1
    x1 == 1 -> f(x) = 4
    ...
    """
    rate_table = np.array([[0, 1, 3, 5], [1, 4, 5, 6], [1.3, 1.7, 2.25, 3.0]])
    bounds_x = (0, 3)  # inclusive; bounds are needed for linearization!

    # Vars
    # ----
    n_vars = len(rate_table)
    n_values_per_var = [len(x) for x in rate_table]

    I = Bool(n_vars, n_values_per_var[0])  # simplified assumption: rate-table sizes equal
    X = Int(n_vars)
    X_ = Variable(n_vars, n_values_per_var[0])  # X_ = mul_elemwise(I*X) broadcasted

    # Constraints
    # -----------
    constraints = []

    # X is bounded
    constraints.append(X >= bounds_x[0])
    constraints.append(X <= bounds_x[1])

    # only one value in rate-table active (often formulated with SOS-type-1 constraints)
    for i in range(n_vars):
        constraints.append(sum_entries(I[i, :]) <= 1)

    # linearization of product of BIN * INT (INT needs to be bounded!)
    # based on Erwin's answer here:
    # https://www.or-exchange.org/questions/10775/how-to-linearize-product-of-binary-integer-and-integer-variables
    for i in range(n_values_per_var[0]):
        constraints.append(bounds_x[0] * I[:, i] <= X_[:, i])
        constraints.append(X_[:, i] <= bounds_x[1] * I[:, i])
        constraints.append(X - bounds_x[1]*(1-I[:, i]) <= X_[:, i])
        constraints.append(X_[:, i] <= X - bounds_x[0]*(1-I[:, i]))

    # Fix choices -> if table-entry x is used -> the integer needs to be x
    # assumptions:
    # - table defined for each int
    help_vec = np.arange(n_values_per_var[0])
    constraints.append(I * help_vec == X)

    # ONLY FOR DEBUGGING -> make simple max each X solution infeasible
    constraints.append(sum_entries(mul_elemwise([1, 3, 2], square(X))) <= 15)

    # Objective
    # ---------
    objective = Maximize(sum_entries(mul_elemwise(rate_table, X_)))

    # Problem & Solve
    # ---------------
    problem = Problem(objective, constraints)
    problem.solve()  # choose other solver if needed, e.g. commercial ones like Gurobi, Cplex
    print('Max-objective: ', problem.value)
    print('X:\n' + str(X.value))

Output

    ('Max-objective: ', 20.70000000000001)
    X:
    [[ 3.]
     [ 1.]
     [ 1.]]
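
The reported solution can be checked by hand against the rate table: the objective is 3*rate_table[0][3] + 1*rate_table[1][1] + 1*rate_table[2][1] = 3*5 + 4 + 1.7 = 20.7, and the debugging constraint holds since 1*3^2 + 3*1^2 + 2*1^2 = 14 <= 15. A quick recomputation outside the solver:

    import numpy as np

    rate_table = np.array([[0, 1, 3, 5], [1, 4, 5, 6], [1.3, 1.7, 2.25, 3.0]])
    x = np.array([3, 1, 1])

    # objective = sum_i x_i * f_i(x_i), read off the rate table
    objective = sum(x[i] * rate_table[i, x[i]] for i in range(len(x)))
    # debugging constraint from the model: 1*x0^2 + 3*x1^2 + 2*x2^2 <= 15
    lhs = np.dot([1, 3, 2], x ** 2)
    print(objective, lhs)  # approx. 20.7 and 14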

Idea

  • Transform the objective max: x0*f(x0) + x1*f(x1) + ...
    • into: x0*f(x0==0) + x0*f(x0==1) + ... + x1*f(x1==0) + x1*f(x1==1) + ...
  • Introduce binary variables to formulate:
    • f(x0==0) as I[0,0]*table[0,0]
    • f(x1==2) as I[1,2]*table[1,2]
  • Add constraints to limit the above I to have only one nonzero entry for each variable x_i (only one of the expanded objective components will be active)
  • Linearize the product x0*f(x0==0) == x0*I[0,0]*table[0,0] (integer * binary * constant); a small stand-alone check of this linearization follows the list
  • Fix the table lookup: using the table entry with index x (of x0) should result in x0 == x
    • assuming that there are no gaps in the table, this can be formulated as I * help_vec == X, where help_vec == vector(lower_bound, ..., upper_bound)
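
As referenced in the linearization bullet, here is a small stand-alone check (plain Python, independent of cvxpy) that enumerates all binary/integer combinations within the bounds and confirms that the four inequalities leave z = b*x as the only feasible value:

    # brute-force check of the BIN * INT linearization constraints used above
    L, U = 0, 3  # bounds of the integer variable (bounds_x in the model)

    for b in (0, 1):                  # the binary I[i, j]
        for x in range(L, U + 1):     # the bounded integer X[i]
            feasible_z = [z for z in range(L, U + 1)
                          if L * b <= z <= U * b
                          and x - U * (1 - b) <= z <= x - L * (1 - b)]
            # the only surviving z is exactly the product b * x
            assert feasible_z == [b * x], (b, x, feasible_z)
    print("all cases reproduce z == b * x")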

cvxpy automatically (by construction) proves that our formulation is convex, which is needed for most solvers (and is in general not easy to recognize).

Just for fun: a bigger problem and a commercial solver

Input generated by:

    def gen_random_growing_table(size):
        return np.cumsum(np.random.randint(1, 10, size))

    SIZE = 100
    VARS = 100
    rate_table = np.array([gen_random_growing_table(SIZE) for v in range(VARS)])
    bounds_x = (0, SIZE-1)  # inclusive; bounds are needed for linearization!
    ...
    ...
    constraints.append(sum_entries(square(X)) <= 150)

Output:

    Explored 19484 nodes (182729 simplex iterations) in 129.83 seconds
    Thread count was 4 (of 4 available processors)

    Optimal solution found (tolerance 1.00e-04)
    Warning: max constraint violation (1.5231e-05) exceeds tolerance
    Best objective -1.594000000000e+03, best bound -1.594000000000e+03, gap 0.0%
    ('Max-objective: ', 1594.0000000000005)
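
For reference, the log above is the kind of output a commercial branch-and-bound solver prints. In cvxpy the solver is selected at solve time; a minimal sketch, assuming the solver and its Python bindings (e.g. gurobipy) are installed and licensed:

    # select a commercial MIP solver explicitly instead of the default
    problem.solve(solver=GUROBI, verbose=True)  # GUROBI is exported by `from cvxpy import *`
    print('Max-objective: ', problem.value)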