
I'm trying to add a series of constraints which make use of an indicator function, but it seems to be breaking the solver.

This is the original formulation of the constraint:

[image of the original constraint, written with an indicator function]

This has to be broken down into a form suitable for PySCIPOpt:

[image of the reformulated constraints; in text form, with z a binary indicator, M a large constant, and t_max = turnover_max:]

sum(a_i) + M*z <= sum(b_i) + M
z * sum(a_i) <= t_max * sum(p_bid_i * x_old_i)
sum(b_i) <= t_max * sum(p_bid_i * x_old_i) + z * sum(b_i)

I'm under the impression this is the big-M method, and it seems like it should work in theory. However, PySCIPOpt does not seem to be able to solve it, returning an error:

[image of the PySCIPOpt error message]

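For what it's worth, my reading of the first reformulated constraint (with z = INDICATOR and M = BIG_NUM) is:

z = 1:  sum(a) + M <= sum(b) + M,  i.e.  sum(a) <= sum(b)
z = 0:  sum(a) <= sum(b) + M,      which M is large enough to make vacuous

so the binary should switch the comparison on and off.
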
Code extract (the problematic section is right at the bottom):

from pyscipopt import Model, quicksum

# Assumed setup for this extract: num_bonds, num_periods, p_ask, p_bid,
# x_old and turnover_max are given data; the objective is defined elsewhere.
model = Model()

# Create variables
a, b, x, y = {}, {}, {}, {}
BIG_NUM = 1e15

for i in range(num_bonds):
    x[i] = model.addVar(lb=0, ub=None, vtype="C", name=f"x{i}")  # new holding of bond i
    a[i] = model.addVar(lb=0, ub=None, vtype="C", name=f"a{i}")  # value bought of bond i
    b[i] = model.addVar(lb=0, ub=None, vtype="C", name=f"b{i}")  # value sold of bond i

for t in range(num_periods):
    y[t] = model.addVar(lb=0, vtype="C", name=f"y{t}")

for i in range(num_bonds):
    # Buy trades: a[i] covers the cash spent increasing the holding
    model.addCons(a[i] >= p_ask[i] * (x[i] - x_old[i]))

    # Sell trades: b[i] covers the cash received from decreasing the holding
    model.addCons(b[i] >= p_bid[i] * (x_old[i] - x[i]))

# Problematic section
INDICATOR = model.addVar(vtype="B", name="INDICATOR")
model.addCons(quicksum(a[i] for i in range(num_bonds)) + BIG_NUM * INDICATOR
              <= quicksum(b[i] for i in range(num_bonds)) + BIG_NUM)
model.addCons(INDICATOR * quicksum(a[i] for i in range(num_bonds))
              <= turnover_max * quicksum(p_bid[i] * x_old[i] for i in range(num_bonds)))
model.addCons(quicksum(b[i] for i in range(num_bonds))
              <= turnover_max * quicksum(p_bid[i] * x_old[i] for i in range(num_bonds))
              + INDICATOR * quicksum(b[i] for i in range(num_bonds)))
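
Incidentally, I wonder whether it matters that the last two constraints multiply the binary INDICATOR by continuous sums, which (as far as I understand) makes them quadratic rather than linear. An untested sketch of a purely linear big-M version of those two implications, assuming BIG_NUM is also a valid upper bound on both sums:

# Untested linear big-M sketch of the same on/off logic, avoiding the
# binary-times-continuous products (assumes BIG_NUM bounds both sums).
cap = turnover_max * sum(p_bid[i] * x_old[i] for i in range(num_bonds))  # constant

# INDICATOR = 1  =>  the sum of buys respects the turnover cap
model.addCons(quicksum(a[i] for i in range(num_bonds)) <= cap + BIG_NUM * (1 - INDICATOR))

# INDICATOR = 0  =>  the sum of sells respects the turnover cap
model.addCons(quicksum(b[i] for i in range(num_bonds)) <= cap + BIG_NUM * INDICATOR)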

I've spent days on this and would greatly appreciate any help, thanks.

EDIT:

Interestingly enough, enabling 2 of the 3 constraints works, i.e.:

# Problematic section
INDICATOR = model.addVar(vtype="B", name="INDICATOR")
# (1) This can be enabled.
model.addCons(quicksum(a[i] for i in range(num_bonds)) + BIG_NUM * INDICATOR
              <= quicksum(b[i] for i in range(num_bonds)) + BIG_NUM)
# (2) This can be enabled.
model.addCons(INDICATOR * quicksum(a[i] for i in range(num_bonds))
              <= turnover_max * quicksum(p_bid[i] * x_old[i] for i in range(num_bonds)))
# (3) If this is enabled, the code breaks.
model.addCons(quicksum(b[i] for i in range(num_bonds))
              <= turnover_max * quicksum(p_bid[i] * x_old[i] for i in range(num_bonds))
              + INDICATOR * quicksum(b[i] for i in range(num_bonds)))

HOWEVER, if (1) is enabled (i.e. not commented out), the returned results show that the other constraints not shown in the code are being violated, which is strange since the solver reports an optimal solution.
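
Would SCIP's native indicator constraints be a better route than big-M here? My unverified understanding is that PySCIPOpt's addConsIndicator(cons, binvar=z) enforces a linear <= constraint whenever z is 1, so the two product constraints could become something like the following (COMPLEMENT is a helper binary I introduced for the INDICATOR = 0 branch):

# Unverified sketch using SCIP's indicator constraints instead of big-M.
cap = turnover_max * sum(p_bid[i] * x_old[i] for i in range(num_bonds))  # constant

COMPLEMENT = model.addVar(vtype="B", name="COMPLEMENT")
model.addCons(INDICATOR + COMPLEMENT == 1)  # COMPLEMENT = 1 - INDICATOR

# INDICATOR = 1  =>  the sum of buys respects the cap
model.addConsIndicator(quicksum(a[i] for i in range(num_bonds)) <= cap, binvar=INDICATOR)
# INDICATOR = 0  =>  the sum of sells respects the cap
model.addConsIndicator(quicksum(b[i] for i in range(num_bonds)) <= cap, binvar=COMPLEMENT)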

  • `BIG_NUM = 1e15` is insane. Computers work with limited precision and solvers contain quite a few tolerances. – Erwin Kalvelagen May 10 '22 at 15:26
  • Thanks for pointing that out; I was following examples of big-M and thought it just had to be a huge number. For context, the numbers involved here go up to 1e9. I tried reducing BIG_NUM, but then the solver ended up running for hours (so, an improvement?) without ever solving it, unfortunately. – Winsor-Mavis May 11 '22 at 00:43
  • Do you have a reference for an example with M=1e15? Such a document would show a total lack of knowledge. – Erwin Kalvelagen May 11 '22 at 13:55
  • I can't find the exact example, but this piece suggests that M be at least 100 times the value of the variables in the constraints. In my case, the variables when solved are around 1e9, so yes, 1e15 is over the top. However, when I set it to 1e10, it looked like even then the solver could not solve it after hours. I've no idea if I set the constraints up correctly, however. https://www.researchgate.net/post/In_the_Big-M_Method_Linear_Programming_how_big_should_M_Be – Winsor-Mavis May 12 '22 at 03:19
  • BigM's can fail even for values as small as 1e4. 1e10 is crazy. Anyway, your model is poorly scaled (wrong units), so you should expect lots of problems. – Erwin Kalvelagen May 12 '22 at 03:29
  • That's good to know. Could you provide any guidance on how I would go about scaling the model? For example, should I be trying to factorise the large numbers, e.g. 5.58e7 into 0.558 * 1e8, such that the variables are transformed into small numbers multiplied by a large constant? (Something like the sketch below?) – Winsor-Mavis May 12 '22 at 16:10

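EDIT 2:

On the scaling suggestion in the comments, this is my understanding of rescaling (untested sketch; SCALE = 1e6 is an arbitrary choice): change units so that the cash amounts, which are around 1e9 here, become numbers of order 1e3, after which a much smaller BIG_NUM should suffice.

# Untested rescaling sketch: express all cash amounts in millions, so that
# variable values and coefficients stay within a few orders of magnitude.
SCALE = 1e6
p_ask = [p / SCALE for p in p_ask]  # prices now in millions of currency units
p_bid = [p / SCALE for p in p_bid]
BIG_NUM = 1e4  # now only needs to dominate the rescaled sums

Since every constraint compares cash to cash, dividing the prices rescales both sides consistently; solved values of a and b would then need multiplying by SCALE to recover the original units.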