
I have a linear problem to solve in integer numbers, and I found a way to solve it using the new milp implementation in scipy. Below is a demonstration code.

The problem is the following. Given a vector of weights w, I am looking for an integer vector x such that the dot product of x and w lies in a given range. It looks something like this:

# minimize
abs(w^T @ x - target)

I translated this into the following formulation for milp:

# maximize
w^T @ x
# constraints
target - error <= w^T @ x <= target + error

In my specific context, several solutions may exist for x. Is there a way to get all the solutions in the given interval instead of maximizing (or minimizing) something?

Here is the milp implementation.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# inputs
ratio_l, ratio_u = 0.2, 3.0
max_bounds = [100, 200, 2, 20, 2]
target = 380.2772 # 338.34175
lambda_parameter = 2
error = lambda_parameter * 1e-6 * target

# coefficients of the linear objective function
w = np.array([12.0, 1.007825, 14.003074, 15.994915, 22.989769], dtype=np.float64)

# the aim is to minimize
#    abs(w^T x - target)

# instead I maximize
#    w^T x
# in the constraint domain
#    target - error <= w^T x <= target + error

# constraints on variables 0 and 1:
# ratio_l <= x[1] / x[0] <= ratio_u
# translation =>
#    (ratio_l - ratio_u) * x[0] <= -ratio_u * x[0] + x[1] <= 0
#    use the max of x[0] to get a constant lower bound

# linear objective function
c = w

# integrality of the decision variables
# 3 is semi-integer = within bounds or 0
integrality = 3 * np.ones_like(w)  

# matrix A that defines the constraints
A = np.array([
    # boundaries of the mass window defined from lambda_parameter
    w,
    # x[1] / x[0] ratio, upper value
    [-ratio_u, 1.0, 0., 0., 0.],
])

# b_up and b_low vectors
# b_low <= A @ x <= b_up
n_max_C = max_bounds[0]
b_up = [
    target + error,  # mass target, upper
    0.,              # x[1] / x[0] ratio, upper
]
b_low = [
    target - error,  # mass target, lower
    (ratio_l - ratio_u) * n_max_C,  # x[1] / x[0] ratio, lower (uses max of x[0])
]

# set up linear constraints
constraints = LinearConstraint(A, b_low, b_up)

bounds = Bounds(
    lb=[0, 0, 0, 0, 0],
    ub=max_bounds,
)

results = milp(
    c=c,
    constraints=constraints,
    integrality=integrality,
    bounds=bounds,
    options=dict(),
)

print(results)

The result is:

            fun: 380.277405
        message: 'Optimization terminated successfully. (HiGHS Status 7: Optimal)'
 mip_dual_bound: 380.27643944560145
        mip_gap: 2.5390790665913637e-06
 mip_node_count: 55
         status: 0
        success: True
              x: array([19., 40.,  0.,  7.,  0.])

But other possible x arrays also satisfy the constraint, just with a higher error. The returned solution is this one:

m = np.dot(w, [19., 40.,  0.,  7.,  0.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s}      match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
      f" {m - target:12.6f} {error:12.6f}   -> {target - error <= m <= target + error}")
    target                      calc m                   deviation        error      match?
380.277200     380.276439 <= 380.277405 <= 380.277961     0.000205     0.000761   -> True

These two other examples also work, and I wonder how I can get them without implementing a grid algorithm (like brute in scipy).

m = np.dot(w, [20., 39.,  1.,  4.,  1.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s}      match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
      f" {m - target:12.6f} {error:12.6f}   -> {target - error <= m <= target + error}")
    target                      calc m                   deviation        error      match?
380.277200     380.276439 <= 380.277678 <= 380.277961     0.000478     0.000761   -> True
m = np.dot(w, [21., 38.,  2.,  1.,  2.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s}      match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
      f" {m - target:12.6f} {error:12.6f}   -> {target - error <= m <= target + error}")
    target                      calc m                   deviation        error      match?
380.277200     380.276439 <= 380.277951 <= 380.277961     0.000751     0.000761   -> True
  • Sounds like you want to enumerate all optimal integer solutions. scipy's milp interfaces with the MILP solver HiGHS, which, AFAIK, doesn't support counting/enumerating solutions yet. If you're open to other python packages for solving your problem, I'll post an answer later. PS: @Reinderien Minimizing the absolute value of a linear function can be solved as an LP, after reformulating the problem. – joni Jan 18 '23 at 13:26
  • @joni well I'll be darned. This has a good explanation - https://math.stackexchange.com/a/1955013/54983 – Reinderien Jan 18 '23 at 13:44
  • Thanks. @joni, yes I am open to other packages if needed. Currently, I solve it by building a list of integers, applying various constraints, and then looking for solutions iteratively. By LP you mean Linear Programming, as in Reinderien's example? – Ger Jan 19 '23 at 08:54
  • @Ger Yes, that's what LP stands for, though I do not think that LP is well-applied to this problem – Reinderien Jan 22 '23 at 17:28
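
For reference, here is a minimal sketch of the absolute-value reformulation joni mentions, using the same inputs as above: introduce a continuous auxiliary variable t with |w^T @ x - target| <= t and minimize t. Note that this still returns a single optimum rather than enumerating all solutions, and the ratio constraint is omitted for brevity.

import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

w = np.array([12.0, 1.007825, 14.003074, 15.994915, 22.989769])
target = 380.2772
max_bounds = [100, 200, 2, 20, 2]

# variables: x[0..4] (integer) plus one auxiliary t (continuous)
# |w @ x - target| <= t  <=>  w @ x - t <= target  and  w @ x + t >= target
c = np.r_[np.zeros(5), 1.0]            # minimize t only
A = np.array([np.r_[w, -1.0],          # w @ x - t <= target
              np.r_[w,  1.0]])         # w @ x + t >= target
constraints = LinearConstraint(A, [-np.inf, target], [target, np.inf])
integrality = np.r_[np.ones(5), 0.0]   # x integer, t continuous
bounds = Bounds(lb=np.zeros(6), ub=max_bounds + [np.inf])

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x, res.fun)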

1 Answer


Linear programming is for optimization, which generally means picking the single best solution in a solution space. Your problem has no single solution to pick. Since you have well-bounded integral variables, "brute force" (though vectorized, not iterative, brute force) is quite practical. This looks like:

  • Within the known bounds of x, attempt all values for all dimensions excluding the largest one (x1, which can range between 0 and 200)
  • Calculate bounds arrays based on your "ratio" constraint
  • Calculate bounds arrays based on the "error from target" constraint
  • Combine the two to find overall bounds
  • Filter on integral solutions that are within the bounds
import numpy as np

ratio_l, ratio_u = 0.2, 3.0
max_bounds = (100, 200, 2, 20, 2)
target = 380.2772
lambda_parameter = 2
error = 1e-6 * lambda_parameter * target
w = np.array((12.0, 1.007825, 14.003074, 15.994915, 22.989769))
i0234 = [0, 2, 3, 4]
w0234 = w[i0234]

# grid of all candidate values for x0, x2, x3, x4: shape (4, n_candidates)
x0234 = np.stack(np.meshgrid(
    *(np.arange(1+max_bounds[m]) for m in i0234)
)).reshape((4, -1))
x0, x2, x3, x4 = x0234

# per-candidate bounds on x1 from the ratio constraint ratio_l <= x1/x0 <= ratio_u
x1_ratio_lower, x1_ratio_upper = np.multiply.outer((ratio_l, ratio_u), x0)
# per-candidate bounds on x1 from target - error <= w @ x <= target + error
x1_target_lower, x1_target_upper = (target - np.add.outer((error, -error), w0234@x0234))/w[1]

# tightest integer bounds combining both constraints
x1_lower = np.ceil(np.max((x1_ratio_lower, x1_target_lower), axis=0)).astype(int)
x1_upper = np.floor(np.min((x1_ratio_upper, x1_target_upper), axis=0)).astype(int)
# keep only candidates for which at least one integer x1 exists
ok, = (x1_upper >= x1_lower).nonzero()

for i in ok:
    xi = x0234[:, i]
    for x1 in range(x1_lower[i], x1_upper[i]+1):
        x = [xi[0], x1, *xi[1:]]
        target_approx = w.dot(x)
        error_approx = target_approx - target
        print(f'x={x} w@x={target_approx:.6f} ~ {target}, '
              f'error={error_approx:.2e}<{error:.2e}')
x=[19, 40, 0, 7, 0] w@x=380.277405 ~ 380.2772, error=2.05e-04<7.61e-04
x=[20, 39, 1, 4, 1] w@x=380.277678 ~ 380.2772, error=4.78e-04<7.61e-04
x=[21, 38, 2, 1, 2] w@x=380.277951 ~ 380.2772, error=7.51e-04<7.61e-04

A simple boundary contraction reduces the search space: every term of w @ x is non-negative and the ratio constraint forces x[1] >= ratio_l * x[0], so any x[0] above (target + error) / (w[0] + w[1] * ratio_l) ≈ 31.2 already overshoots the mass window. The grid over x[0] therefore shrinks from 101 to 32 values:

import numpy as np

ratio_l, ratio_u = 0.2, 3.0
max_bounds = np.array((100, 200, 2, 20, 2))
target = 380.2772
lambda_parameter = 2
error = 1e-6 * lambda_parameter * target
w = np.array((12.0, 1.007825, 14.003074, 15.994915, 22.989769))

# largest feasible x0: with x1 at its ratio minimum and x2..x4 at zero,
# w @ x must still not exceed target + error
x0u = (target + error) / (w[0] + w[1] * ratio_l)
assert np.isclose(target, w.dot((x0u, x0u * ratio_l, 0, 0, 0)) - error)
max_bounds[0] = min(max_bounds[0], np.floor(x0u))

i0234 = [0, 2, 3, 4]
w0234 = w[i0234]

x0234 = np.stack(np.meshgrid(
    *(np.arange(1+max_bounds[m]) for m in i0234)
)).reshape((4, -1))
x0, x2, x3, x4 = x0234

x1_ratio_lower, x1_ratio_upper = np.multiply.outer((ratio_l, ratio_u), x0)
x1_target_lower, x1_target_upper = (target - np.add.outer((error, -error), w0234@x0234))/w[1]

x1_lower = np.ceil(np.max((x1_ratio_lower, x1_target_lower), axis=0)).astype(int)
x1_upper = np.floor(np.min((x1_ratio_upper, x1_target_upper), axis=0)).astype(int)
ok, = (x1_upper >= x1_lower).nonzero()

for i in ok:
    xi = x0234[:, i]
    for x1 in range(x1_lower[i], x1_upper[i]+1):
        x = [xi[0], x1, *xi[1:]]
        target_approx = w.dot(x)
        error_approx = target_approx - target
        print(f'x={x} w@x={target_approx:.6f} ~ {target}, '
              f'error={error_approx:.2e}<{error:.2e}')
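
If a dedicated solver is acceptable (as joni suggests in the comments), constraint programming can enumerate every feasible assignment directly instead of optimizing. Below is a sketch, assuming Google OR-Tools CP-SAT (the ortools package, not used anywhere above): the weights have at most six decimals, so scaling by 10**6 makes every coefficient exactly integral, and the ratio constraint 0.2 <= x[1]/x[0] <= 3.0 rewrites as 5*x[1] >= x[0] and x[1] <= 3*x[0].

import math
from ortools.sat.python import cp_model

target = 380.2772
error = 2 * 1e-6 * target
max_bounds = [100, 200, 2, 20, 2]
SCALE = 10**6
w_int = [12_000_000, 1_007_825, 14_003_074, 15_994_915, 22_989_769]  # w * SCALE, exact
lo = math.ceil((target - error) * SCALE)   # integer mass window, lower
hi = math.floor((target + error) * SCALE)  # integer mass window, upper

model = cp_model.CpModel()
x = [model.NewIntVar(0, ub, f'x{i}') for i, ub in enumerate(max_bounds)]
mass = sum(wi * xi for wi, xi in zip(w_int, x))
model.AddLinearConstraint(mass, lo, hi)   # target - error <= w @ x <= target + error
model.Add(5 * x[1] >= x[0])               # x[1] / x[0] >= 0.2
model.Add(x[1] <= 3 * x[0])               # x[1] / x[0] <= 3.0

class Printer(cp_model.CpSolverSolutionCallback):
    def on_solution_callback(self):
        print([self.Value(v) for v in x])

solver = cp_model.CpSolver()
solver.parameters.enumerate_all_solutions = True
solver.Solve(model, Printer())

With the inputs above this should recover the same three vectors as the grid search.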