Here is some prototype code to show that your problem is quite hard, regarding both formulation and performance (the former is visible in the code). The implementation uses cvxpy for modelling (convex programming only) and is based on a mixed-integer approach.
Code
import numpy as np
from cvxpy import *  # note: old (pre-1.0) cvxpy API (Bool, Int, sum_entries, mul_elemwise)
"""
x0 == 0 -> f(x) = 0
x0 == 1 -> f(x) = 1
...
x1 == 0 -> f(x) = 1
x1 == 1 -> f(x) = 4
...
"""
rate_table = np.array([[0, 1, 3, 5], [1, 4, 5, 6], [1.3, 1.7, 2.25, 3.0]])
bounds_x = (0, 3) # inclusive; bounds are needed for linearization!
# Vars
# ----
n_vars = len(rate_table)
n_values_per_var = [len(x) for x in rate_table]
I = Bool(n_vars, n_values_per_var[0]) # simplified assumption: rate-table sizes equal
X = Int(n_vars)
X_ = Variable(n_vars, n_values_per_var[0])  # X_[i, j] == I[i, j] * X[i] (product, linearized below)
# Constraints
# -----------
constraints = []
# X is bounded
constraints.append(X >= bounds_x[0])
constraints.append(X <= bounds_x[1])
# only one value in rate-table active (often formulated with SOS-type-1 constraints)
for i in range(n_vars):
    constraints.append(sum_entries(I[i, :]) <= 1)
# linearization of product of BIN * INT (INT needs to be bounded!)
# based on Erwin's answer here:
# https://www.or-exchange.org/questions/10775/how-to-linearize-product-of-binary-integer-and-integer-variables
for i in range(n_values_per_var[0]):
    # X_[:, i] is forced to 0 where I[:, i] == 0 ...
    constraints.append(bounds_x[0] * I[:, i] <= X_[:, i])
    constraints.append(X_[:, i] <= bounds_x[1] * I[:, i])
    # ... and forced to equal X where I[:, i] == 1
    constraints.append(X - bounds_x[1]*(1-I[:, i]) <= X_[:, i])
    constraints.append(X_[:, i] <= X - bounds_x[0]*(1-I[:, i]))
# Fix choices -> if table-entry x is used -> the integer needs to equal x
# assumptions:
# - table defined for each int
help_vec = np.arange(n_values_per_var[0])
constraints.append(I * help_vec == X)
# ONLY FOR DEBUGGING -> make the trivial "max out each X" solution infeasible
constraints.append(sum_entries(mul_elemwise([1, 3, 2], square(X))) <= 15)
# Objective
# ---------
objective = Maximize(sum_entries(mul_elemwise(rate_table, X_)))
# Problem & Solve
# ---------------
problem = Problem(objective, constraints)
problem.solve()  # choose another solver if needed, e.g. commercial ones like Gurobi or Cplex
print('Max-objective: ', problem.value)
print('X:\n' + str(X.value))
Output
('Max-objective: ', 20.70000000000001)
X:
[[ 3.]
[ 1.]
[ 1.]]
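Quick sanity check of that output: with X = [3, 1, 1] the objective is 3*f0(3) + 1*f1(1) + 1*f2(1) = 3*5 + 1*4 + 1*1.7 = 20.7, which matches the reported value; the debugging constraint evaluates to 1*9 + 3*1 + 2*1 = 14 <= 15, so the solution is feasible.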
Idea
- Transform the objective
max: x0*f(x0) + x1*f(x1) + ...
- into:
x0*f(x0==0) + x0*f(x0==1) + ... + x1*f(x1==0) + x1*f(x1==1)+ ...
- Introduce binary-variables to formulate:
f(x0==0) as I[0,0]*table[0,0]
f(x1==2) as I[1,2]*table[1,2]
- Add constraints to limit the above I to have only one nonzero entry for each variable x_i
(only one of the expanded objective-components will be active)
- Linearize the product
x0*f(x0==0) == x0*I[0,0]*table[0,0]
(integer * binary * constant; see the small check after this list)
- Fix the table-lookup: using table-entry with index x (of x0) should result in
x0 == x
- assuming that there are no gaps in the table, this can be formulated as
I * help_vec == X
where help_vec == vector(lower_bound, ..., upper_bound)
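The linearization step can also be checked in isolation. Here is a minimal sketch (plain Python, no solver; it assumes the bounds 0..3 from the example above) which enumerates all cases and confirms that the four inequalities admit exactly z == b*x:
lb, ub = 0, 3                         # bounds of the integer variable, as in bounds_x
for b in (0, 1):                      # the binary variable
    for x in range(lb, ub + 1):       # the bounded integer variable
        feasible_z = [z for z in range(lb, ub + 1)
                      if lb * b <= z <= ub * b
                      and x - ub * (1 - b) <= z <= x - lb * (1 - b)]
        assert feasible_z == [b * x]  # only the true product survives
print('linearization check passed')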
cvxpy automatically proves (by construction) that our formulation is convex, which is needed by most solvers (and is in general not easy to recognize).
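Continuing with the problem built above, this check can also be queried explicitly via Problem.is_dcp():
print(problem.is_dcp())  # expected: True -> formulation accepted as (mixed-integer) convex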
Just for fun: bigger problem and commercial solver
Input generated by:
def gen_random_growing_table(size):
    return np.cumsum(np.random.randint(1, 10, size))
SIZE = 100
VARS = 100
rate_table = np.array([gen_random_growing_table(SIZE) for v in range(VARS)])
bounds_x = (0, SIZE-1) # inclusive; bounds are needed for linearization!
...
...
constraints.append(sum_entries(square(X)) <= 150)
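The log below looks like Gurobi's output; with cvxpy the solver can be selected explicitly, e.g. (a sketch, assuming Gurobi and its cvxpy interface are installed; GUROBI is the solver constant coming from the cvxpy import):
problem = Problem(objective, constraints)
problem.solve(solver=GUROBI, verbose=True)  # instead of the plain problem.solve() above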
Output:
Explored 19484 nodes (182729 simplex iterations) in 129.83 seconds
Thread count was 4 (of 4 available processors)
Optimal solution found (tolerance 1.00e-04)
Warning: max constraint violation (1.5231e-05) exceeds tolerance
Best objective -1.594000000000e+03, best bound -1.594000000000e+03, gap 0.0%
('Max-objective: ', 1594.0000000000005)