
This is the same question as the one below, but the difference is that I'm using docplex.

cplex.linear_constraints.add too slow for large models

How can I add constraints using indices with docplex?

My code looks something like this:

from docplex.mp.model import Model

# n, l and the coefficient matrix B are assumed to be defined elsewhere
lm = Model()
x = lm.binary_var_dict(range(n), name="x")
xv = [ax for i, ax in x.items()]

# build one {variable: coefficient} dict per row and add constraints one by one
for i in range(l):
    Bx = {xv[j]: B[i, j] for j in range(n)}
    Bx = lm.linear_expr(Bx)
    lm.add_constraint(Bx == 1)
nemy
  • Please show the code you are using for docplex, otherwise it is hard to tell how it could be improved. Are you adding constraints one by one or in a batch? Are you sure the time is lost when adding the constraints, or might the problem be in creating them? – Daniel Junglas Oct 09 '19 at 08:53
  • Sorry about that. Please find the code above. – nemy Oct 09 '19 at 18:46

2 Answers


Can you try to add constraints in batches?

Adding constraints to the model in batches using Model.add_constraints() is usually more efficient. Try grouping the constraints in lists or generator comprehensions (both work).

Example:

# from the linked article: ys is a list of variables, rsize a range
m.add_constraints((m.dotf(ys, lambda j_: i + (i+j_) % 3) >= i for i in rsize),
                  ("ct_%d" % i for i in rsize))

From Writing efficient DOcplex code
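
Applied to the model in the question, a minimal batched sketch could look like this (assuming n, l, B, lm and xv are defined as in the question):

# sketch: build all l constraints in one add_constraints call
# instead of calling add_constraint inside a loop
lm.add_constraints(
    (lm.sum(B[i, j] * xv[j] for j in range(n)) == 1 for i in range(l)),
    ("ct_%d" % i for i in range(l)))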

Alex Fleischer

There are a number of alternative ways in which you could create your constraints. For example, you can use the functions Model.sum or Model.scal_prod, and you can batch the creation or not. Here is a small test code that illustrates the different variants:

from docplex.mp.model import Model
import time

# problem size and a dense l x n coefficient "matrix" stored as a dict
n = 1000
l = n
B = {(i, j): i * n + j for i in range(l) for j in range(n)}
with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]

    start = time.time()
    # variant 1: one linear_expr per row, constraints added one at a time
    for i in range(l):
        Bx = {xv[j]: B[i, j] for j in range(n)}
        Bx = m.linear_expr(Bx)
        m.add_constraint(Bx == 1)
    elapsed1 = time.time() - start
print('Original: %.2f' % elapsed1)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]

    start = time.time()
    # variant 2: same linear_expr construction, batched via add_constraints
    m.add_constraints(
        m.linear_expr({xv[j]: B[i, j] for j in range(n)}) == 1 for i in range(l))
    elapsed2 = time.time() - start
print('Original batched: %.2f' % elapsed2)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]

    start = time.time()
    # variant 3: Model.sum over a generator, one constraint at a time
    for i in range(l):
        m.add_constraint(m.sum(B[i, j] * xv[j] for j in range(n)) == 1)
    elapsed3 = time.time() - start
print('Sum: %.2f' % elapsed3)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]

    start = time.time()
    # variant 4: Model.sum, batched via add_constraints
    m.add_constraints(
        m.sum(B[i, j] * xv[j] for j in range(n)) == 1 for i in range(l))
    elapsed4 = time.time() - start
print('Sum batched: %.2f' % elapsed4)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]

    start = time.time()
    # variant 5: Model.scal_prod with explicit variable and coefficient lists
    for i in range(l):
        m.add_constraint(m.scal_prod([xv[j] for j in range(n)],
                                     [B[i, j] for j in range(n)]) == 1)
    elapsed5 = time.time() - start
print('scal_prod: %.2f' % elapsed5)

with Model() as m:
    x = m.binary_var_dict(range(n), name="x")
    xv = [ax for i, ax in x.items()]

    start = time.time()
    # variant 6: Model.scal_prod, batched via add_constraints
    m.add_constraints(
        m.scal_prod([xv[j] for j in range(n)],
                    [B[i, j] for j in range(n)]) == 1 for i in range(l))
    elapsed6 = time.time() - start
print('scal_prod batched: %.2f' % elapsed6)

On my box this gives:

Original: 1.86
Original batched: 1.82
Sum: 2.84
Sum batched: 2.81
scal_prod: 1.55
scal_prod batched: 1.50

So batching does not buy much here, but scal_prod is faster than linear_expr (and Model.sum is the slowest of the three).
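
As a further micro-optimization (not timed above, so take it as a sketch), xv is already a list of all the variables, so it can be passed to scal_prod directly instead of being rebuilt for every row:

# sketch: reuse the existing variable list xv, so only the coefficient
# list is rebuilt for each of the l constraints
m.add_constraints(
    m.scal_prod(xv, [B[i, j] for j in range(n)]) == 1 for i in range(l))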

Daniel Junglas