I have a linear problem to solve in integer numbers. I found a way to solve it using the new milp implementation in scipy. Below is a demonstration code.
The problem is the following. Given a vector of weights w,
I am looking for an integer vector x such that the dot product of x and w lies in a given range. It looks something like this:
# minimize
abs(w^T @ x - target)
I translated this into the following formulation to implement it with milp:
# maximize
w^T @ x
# constraints
target - error <= w^T @ x <= target + error
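(As a side note, I believe the absolute deviation itself could be minimized exactly by adding one auxiliary continuous variable t with t >= w^T x - target and t >= target - w^T x, then minimizing t. A rough sketch of that alternative, with the same weights and target but without the ratio constraint used in the implementation below:)

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

w = np.array([12.0, 1.007825, 14.003074, 15.994915, 22.989769])
target = 380.2772
max_bounds = [100, 200, 2, 20, 2]
n = w.size

# decision vector is [x, t]; minimize t subject to
#    w @ x - t <= target     (i.e. t >=  w @ x - target)
#   -w @ x - t <= -target    (i.e. t >= target - w @ x)
c = np.r_[np.zeros(n), 1.0]
A = np.vstack([np.r_[w, -1.0], np.r_[-w, -1.0]])
constraints = LinearConstraint(A, -np.inf, [target, -target])
integrality = np.r_[np.ones(n), 0.0]  # x integer, t continuous
bounds = Bounds(lb=np.zeros(n + 1), ub=np.r_[max_bounds, np.inf])

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x[:n], res.fun)  # res.fun is the absolute deviation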
In my specific context, several solutions may exist for x. Is there a way to get all the solutions that fall in the given interval, instead of maximizing (or minimizing) something?
Here is the milp implementation.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds
# inputs
ratio_l, ratio_u = 0.2, 3.0
max_bounds = [100, 200, 2, 20, 2]
target = 380.2772 # 338.34175
lambda_parameter = 2
error = lambda_parameter * 1e-6 * target
# coefficients of the linear objective function
w = np.array([12.0, 1.007825, 14.003074, 15.994915, 22.989769], dtype=np.float64)
# the aim is to minimize
# abs(w^T x - target)
# instead I maximize
# w^T x
# in the constraint domain
# target - error <= w^T x <= target + error
# constraints on variables 0 and 1:
# ratio_l <= x[1] / x[0] <= ratio_u
# translation =>
# (ratio_l - ratio_u) * x[0] <= -ratio_u * x[0] + x[1] <= 0
# use max(x[0]) = max_bounds[0] to turn the lower bound into a constant
# linear objective function
c = w
# integrality of the decision variables
# 3 is semi-integer = within bounds or 0
integrality = 3 * np.ones_like(w)
# matrix A that defines the constraints
A = np.array([
# boundaries of the mass defined from lambda_parameters
w,
# x[1] / x[0] max ratio
[-ratio_u, 1.0, 0., 0., 0.],
])
# b_up and b_low vectors
# b_low <= A @ x <= b_up
n_max_C = max_bounds[0]
b_up = [
target + error, # mass target
0., # x[1] / x[0] ratio, upper bound
]
b_low = [
target - error, # mass target
(ratio_l - ratio_u) * n_max_C, # x[1] / x[0] ratio, relaxed lower bound
]
# set up linear constraints
constraints = LinearConstraint(A, b_low, b_up)
bounds = Bounds(
lb=[0, 0, 0, 0, 0],
ub=max_bounds,
)
results = milp(
c=c,
constraints=constraints,
integrality=integrality,
bounds=bounds,
options=dict(),
)
print(results)
The result is this:
fun: 380.277405
message: 'Optimization terminated successfully. (HiGHS Status 7: Optimal)'
mip_dual_bound: 380.27643944560145
mip_gap: 2.5390790665913637e-06
mip_node_count: 55
status: 0
success: True
x: array([19., 40., 0., 7., 0.])
But other possible x arrays exist, just with a larger deviation. This one is the returned solution:
m = np.dot(w, [19., 40., 0., 7., 0.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s} match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
f" {m - target:12.6f} {error:12.6f} -> {target - error <= m <= target + error}")
target calc m deviation error match?
380.277200 380.276439 <= 380.277405 <= 380.277961 0.000205 0.000761 -> True
These two other examples also work, and I wonder how I can get them without implementing a grid algorithm (like brute in scipy); a sketch of that brute-force search is shown after the examples below.
m = np.dot(w, [20., 39., 1., 4., 1.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s} match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
f" {m - target:12.6f} {error:12.6f} -> {target - error <= m <= target + error}")
target calc m deviation error match?
380.277200 380.276439 <= 380.277678 <= 380.277961 0.000478 0.000761 -> True
m = np.dot(w, [21., 38., 2., 1., 2.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s} match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
f" {m - target:12.6f} {error:12.6f} -> {target - error <= m <= target + error}")
target calc m deviation error match?
380.277200 380.276439 <= 380.277951 <= 380.277961 0.000751 0.000761 -> True
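For completeness, the brute-force enumeration I would like to avoid would look roughly like this (a slow sketch that loops over every integer combination within the bounds and applies the intended ratio test directly; skipping the ratio test when x[0] == 0 is my own choice here):

import itertools

w = [12.0, 1.007825, 14.003074, 15.994915, 22.989769]
max_bounds = [100, 200, 2, 20, 2]
ratio_l, ratio_u = 0.2, 3.0
target = 380.2772
error = 2 * 1e-6 * target

solutions = []
# every integer x within the bounds (~3.8 million candidates, so this is slow)
for x in itertools.product(*(range(b + 1) for b in max_bounds)):
    m = sum(wi * xi for wi, xi in zip(w, x))
    if abs(m - target) > error:
        continue
    # intended ratio constraint (checked only when x[0] > 0)
    if x[0] and not (ratio_l <= x[1] / x[0] <= ratio_u):
        continue
    solutions.append((x, m - target))

for x, deviation in sorted(solutions, key=lambda s: abs(s[1])):
    print(x, f"{deviation:+.6f}")

This works but scales badly with the bounds, which is why I am looking for a milp-based way to list all the feasible x.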