
I have the following optimization problem:

maximize x1 + x2 + 2*x3 + x4

subject to x1 + x2 + x3 + x4 = 2

where xi ∈ {0, 1} (binary variables).

When I implement this in Pulp as:

import pulp
from pulp import *

x1 = pulp.LpVariable('x1', cat=LpBinary)
x2 = pulp.LpVariable('x2', cat=LpBinary)
x3 = pulp.LpVariable('x3', cat=LpBinary)
x4 = pulp.LpVariable('x4', cat=LpBinary)

prob = pulp.LpProblem('x1+x2+2*x3+x4', pulp.LpMaximize)
prob += lpSum([x1, x2, 2*x3, x4])
prob += lpSum([x1, x2, x3, x4]) == 2
prob.solve()

I get the solution x1 = 1 and x3 = 1 (with x2 = x4 = 0). However, I would like to disallow this particular solution by adding another constraint, x1 + x3 < 2, in Pulp as:

prob += lpSum([x1, x3]) < 2

But then I don't get a good solution, because I end up with a dummy variable in prob.solve(). Should I use another constraint, or am I doing something wrong?

RobinHood

2 Answers


When you run prob += lpSum([x1, x3]) < 2, you get the following error:

Traceback (most recent call last):
  File "xyz.py", line 13, in <module>
    prob += lpSum([x1, x3]) < 2
TypeError: '<' not supported between instances of 'LpAffineExpression' and 'int'

This is the pulp package telling you that you cannot add strict inequalities; disallowing them is standard behavior for optimization solvers. Since all your variables are binary-valued, < 2 is the same as <= 1, and making this change will make the solver happy:

prob += lpSum([x1, x3]) <= 1
prob.solve()
print(x1.value(), x2.value(), x3.value(), x4.value())
# 0.0 1.0 1.0 0.0

So now you get a different solution with the same objective value of 3 (x2=1 and x3=1).
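
Since the model has only four binary variables, you can sanity-check this by brute force, independently of pulp. The enumeration below is only illustrative, but it confirms that there are exactly three optimal solutions, all with objective value 3:

from itertools import product

# Enumerate all 2^4 binary assignments (x1, x2, x3, x4), keep those that
# satisfy x1 + x2 + x3 + x4 == 2, and print the ones with the best objective.
def objective(p):
    return p[0] + p[1] + 2 * p[2] + p[3]

feasible = [p for p in product((0, 1), repeat=4) if sum(p) == 2]
best = max(objective(p) for p in feasible)
print([p for p in feasible if objective(p) == best])
# [(0, 0, 1, 1), (0, 1, 1, 0), (1, 0, 1, 0)]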

If you wanted to output all optimal solutions, you could use a loop that keeps adding constraints disallowing the current optimal solution until the optimal objective value changes:

import pulp
from pulp import *

x1 = pulp.LpVariable('x1', cat=LpBinary)
x2 = pulp.LpVariable('x2', cat=LpBinary)
x3 = pulp.LpVariable('x3', cat=LpBinary)
x4 = pulp.LpVariable('x4', cat=LpBinary)

prob = pulp.LpProblem('x1+x2+2*x3+x4', pulp.LpMaximize)
prob += lpSum([x1, x2, 2*x3, x4])
prob += lpSum([x1, x2, x3, x4]) == 2
prob.solve()
print(x1.value(), x2.value(), x3.value(), x4.value())

opt = prob.objective.value()
while True:
    # Cut off the incumbent solution: the variables currently at 1 (their
    # current values are used as coefficients) may not all be 1 again.
    prob += lpSum([x.value() * x for x in [x1, x2, x3, x4]]) <= 1 + 1e-6
    prob.solve()
    if prob.objective.value() >= opt - 1e-6:
        print(x1.value(), x2.value(), x3.value(), x4.value())
    else:
        break  # No more optimal solutions

This yields the expected output:

# 1.0 0.0 1.0 0.0
# 0.0 1.0 1.0 0.0
# 0.0 0.0 1.0 1.0
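
If you need this enumeration in more than one model, the loop can be wrapped in a small helper along these lines. This is only a sketch under a couple of assumptions: it expects a maximization problem over binary variables, and enumerate_optima is just an illustrative name rather than anything provided by pulp. It rounds the solver values before building each cut, to guard against values like 0.9999, and it stops when the objective drops or the problem becomes infeasible:

def enumerate_optima(prob, variables, tol=1e-6):
    """Collect optimal 0/1 assignments of `variables` by repeatedly
    cutting off the incumbent solution and re-solving."""
    solutions = []
    status = prob.solve()
    best = prob.objective.value()
    while status == LpStatusOptimal and prob.objective.value() >= best - tol:
        # Round the solver values so 0.9999 / 0.0001 are treated as 1 / 0.
        incumbent = [int(round(x.value())) for x in variables]
        solutions.append(incumbent)
        # Exclude the incumbent: the variables currently at 1 cannot all be 1 again.
        # Note: this also forbids any solution whose set of 1s contains the incumbent's.
        ones = [x for x, v in zip(variables, incumbent) if v == 1]
        prob += lpSum(ones) <= len(ones) - 1
        status = prob.solve()
    return solutions

Called on a freshly built copy of the model above (before any exclusion constraints have been added), it returns the same three assignments that the loop prints.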
josliber
  • To get all solutions you need to change the constraint you have added: `while True: prob += lpSum([x for x in [x1, x2, x3, x4] if x.value() >= 0.99]) <= 1 prob.solve() if prob.objective.value() >= opt - 1e-6: print(x1.value(), x2.value(), x3.value(), x4.value()) else: break # No more optimal solutions` – Stuart Mitchell May 09 '18 at 22:17
  • @StuartMitchell The two approaches are identical and just represent different python code for the same constraint (well, yours is more general because you're not assuming they sum to 2 and hence the constraint needs to have `<=1`, but in this model we know they sum to 2). – josliber May 10 '18 at 01:35
  • @josliber not really as your solution also can have problems with floating point arithmetic as pulp often returns 0.9999 for 1 and 0.00001 for 0 – Stuart Mitchell May 11 '18 at 01:58
  • @StuartMitchell fair enough -- I went ahead and added 1e-6 to the RHS, which should clear up those sorts of concerns. – josliber May 11 '18 at 03:02

To find ALL solutions, you need the following modification to the constraint added inside the loop:

while True:
    # Variables at 1 in the incumbent solution (the 0.99 threshold guards
    # against solver values like 0.9999).
    ones = [x for x in [x1, x2, x3, x4] if x.value() >= 0.99]
    # Exclude the incumbent: these variables cannot all be 1 at the same time again.
    prob += lpSum(ones) <= len(ones) - 1
    prob.solve()
    if prob.objective.value() >= opt - 1e-6:
        print(x1.value(), x2.value(), x3.value(), x4.value())
    else:
        break  # No more optimal solutions
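
This cut excludes the incumbent together with any solution that keeps all of its 1-valued variables at 1, which is enough here because every feasible solution has exactly two variables at 1. If you need a cut that excludes only the incumbent assignment and nothing else, regardless of the other constraints, the standard fully general "no-good" cut also involves the zero-valued variables. A sketch, assuming the same four variables:

# Require at least one variable to change value relative to the incumbent x*:
#     sum over {i : x*_i = 1} of (1 - x_i)  +  sum over {i : x*_i = 0} of x_i  >= 1
ones = [x for x in [x1, x2, x3, x4] if x.value() >= 0.99]
zeros = [x for x in [x1, x2, x3, x4] if x.value() < 0.99]
prob += lpSum([1 - x for x in ones]) + lpSum(zeros) >= 1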
Stuart Mitchell