
I have been using lpsolve from Python for a very long time, with generally good results. I have even written my own Cython wrapper for it to work around the messiness of the original Python wrapper.

My Cython wrapper is much faster at setting up the problem, but of course the solve time depends only on the lpsolve C code and has nothing to do with Python.

I am only solving real-valued linear problems, no MIPs. The latest (and one of the biggest) LP I had to solve had a constraint matrix of about 5,000 x 3,000, and the setup plus solve takes about 150 ms. The problem is, I have to solve a slightly modified version of the same problem many times in my simulation (constraints, RHS, bounds and so on are time-dependent across many timesteps). The constraint matrix is usually very sparse, with roughly 0.1% to 0.5% nonzero entries, or fewer.
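With sparsity like that, one way to avoid paying for a dense 5,000 x 3,000 array at all is to build the constraint matrix directly from (row, column, value) triplets in SciPy's COO format, so the cost is linear in the number of nonzeros. A minimal sketch, assuming SciPy is available (the entries below are made up for illustration):

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical nonzero pattern: (row, col, value) triplets for a
# 5000 x 3000 constraint matrix with only a handful of entries
rows = np.array([0, 1, 4999])
cols = np.array([2, 0, 2999])
vals = np.array([1.5, -2.0, 3.0], dtype=np.float32)

# Build in COO form and convert to CSR for fast row-wise access;
# no dense intermediate array is ever created
A = coo_matrix((vals, (rows, cols)), shape=(5000, 3000)).tocsr()

print(A.nnz)    # number of stored nonzeros
print(A.dtype)  # float32 is preserved by the sparse container
```

Whether this helps depends on whether the triplets fall out of the model naturally; if the matrix is generated dense and converted afterwards, the conversion itself costs time, as noted in the comments below.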

Using lpsolve, the setup and solve of the problem is as easy as the following:

import numpy
from lp_solve.lp_solve import PRESOLVE_COLS, PRESOLVE_ROWS, PRESOLVE_LINDEP, PRESOLVE_NONE, SCALE_DYNUPDATE, lpsolve

# The constraint_matrix is a 2D NumPy array containing both the
# equality and inequality constraints. It needs to be single
# precision floating point, like all other NumPy arrays from here onwards
m, n = constraint_matrix.shape

n_le = len(inequality_constraints)
n_e  = len(equality_constraints)

# Setup RHS vector
b = numpy.zeros((m, ), dtype=numpy.float32)

# Assign equality and inequality constraints (= and <=)
b[0:n_e]  = equality_constraints
b[-n_le:] = inequality_constraints

# Tell lpsolve which rows are equalities and which are inequalities
# the first n_e rows are '=' and the remaining n_le rows are '<='
e = numpy.asarray(['EQ']*n_e + ['LE']*n_le)

# Make the LP
lp = lpsolve('make_lp', m, n)

# Some options for scaling the problem
lpsolve('set_scaling', lp, SCALE_DYNUPDATE)
lpsolve('set_verbose', lp, 'IMPORTANT')

# Use presolve as it is much faster
lpsolve('set_presolve', lp, PRESOLVE_COLS | PRESOLVE_ROWS | PRESOLVE_LINDEP)

# I only care about maximization
lpsolve('set_sense', lp, True)

# Set the objective function of the problem
lpsolve('set_obj_fn', lp, objective_function)
lpsolve('set_mat', lp, constraint_matrix)

# Tell lpsolve about the RHS
lpsolve('set_rh_vec', lp, b)

# Set the constraint type (equality or inequality)
lpsolve('set_constr_type', lp, e)

# Set upper bounds for variables - lower bounds are automatically 0
lpsolve('set_upbo', lp, ub_values)

# Solve the problem
out = lpsolve('solve', lp)

# Retrieve the solution for all the variables
vars_sol = numpy.asarray([lpsolve('get_var_primalresult', lp, i) for i in range(m + 1, m + n + 1)], dtype=numpy.float32)

# Delete the problem, timestep done
lpsolve('delete_lp', lp)

For reasons that are too long to explain, my NumPy arrays are all single precision floating point arrays, and I'd like them to stay that way.

Now, after all this painful introduction, I would like to ask: does anyone know of another library (with reasonable Python wrappers) that allows me to set up and solve a problem of this size as fast as (or potentially faster than) lpsolve? Most of the libraries I have looked at (PuLP, CyLP, PyGLPK and so on) do not seem to have a straightforward way to say "this is my entire constraint matrix, set it in one go". They mostly seem to be oriented towards being "modelling languages" that allow fancy things like this (CyLP example):

import numpy as np
from cylp.cy import CyClpSimplex
from cylp.py.modeling.CyLPModel import CyLPArray

s = CyClpSimplex()

# Add variables
x = s.addVariable('x', 3)
y = s.addVariable('y', 2)

# Create coefficients and bounds
A = np.matrix([[1., 2., 0],[1., 0, 1.]])
B = np.matrix([[1., 0, 0], [0, 0, 1.]])
D = np.matrix([[1., 2.],[0, 1]])
a = CyLPArray([5, 2.5])
b = CyLPArray([4.2, 3])
x_u= CyLPArray([2., 3.5])

# Add constraints
s += A * x <= a
s += 2 <= B * x + D * y <= b
s += y >= 0
s += 1.1 <= x[1:3] <= x_u

I honestly don't care about the flexibility; I just need raw speed in the problem setup and solve. Creating NumPy matrices and performing all those fancy operations above is definitely going to be a performance killer.

I'd rather stay on Open Source solvers if possible, but any suggestion is most welcome. My apologies for the long post.

Andrea.

Infinity77
  • CyLP might be a candidate. I used much more low-level numpy/scipy modelling [here](https://github.com/sschnug/kemeny_ranking/blob/master/kemeny.py) before just posting the sparse matrix to CyLP (*extract:* `model += lhs * x == rhs`). Sure, this (SciPy) sparse format will need to be copied/transformed for Cbc/Clp, but that should be linear in the number of non-zeros. – sascha Apr 24 '19 at 11:09
  • @sascha: thank you for the comment and the link. Looks interesting, although the generation of the constraint matrix is quite complex in my case, and the transformation to a sparse matrix takes about the same time as setting up the linear problem (i.e., 2D NumPy array to scipy sparse takes about 70-80 ms). I'll take a look at your code more comprehensively, although I am not sure it's easily adaptable to my problem at hand... – Infinity77 Apr 24 '19 at 13:19
  • Try `linprog(method="highs-ipm", ...)` in scipy 1.6, with sparse CSR arrays? (I'd guess that sparse float32 will be converted to float64 -- the [doc](https://docs.scipy.org/doc/scipy/reference/optimize.linprog-highs-ipm.html) doesn't say.) – denis Jan 24 '21 at 15:49
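For reference, the `scipy.optimize.linprog` route mentioned in the last comment does accept the whole constraint matrix in one call, including SciPy sparse input with the HiGHS methods. A toy sketch (made-up data; maximization is done by negating the objective, since `linprog` minimizes):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import csr_matrix

# Toy problem: maximize x + 2y  subject to  x + y <= 4, x <= 3, x, y >= 0
c = np.array([1.0, 2.0])
A_ub = csr_matrix(np.array([[1.0, 1.0],
                            [1.0, 0.0]]))
b_ub = np.array([4.0, 3.0])

# linprog minimizes, so pass -c to maximize c @ x
res = linprog(-c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")

print(res.x)     # optimal point
print(-res.fun)  # maximized objective value
```

The whole `A_ub`/`A_eq` matrix is handed over in a single call, which is exactly the "set it in one go" style the question asks for, though float32 inputs will be promoted to float64 internally.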

1 Answer


@infinity77,

I am looking at using lpsolve now. It is fairly straightforward to generate a .lp file for input, and it did solve several smaller problems. BUT... I am now attempting to solve a node coloring problem, which is a MIP. My first attempt at a 500-node problem ran for about 15 minutes as an LP relaxation. lpsolve has been grinding on the true MIP since last night, and it is still going. These coloring problems are difficult, but 14 hours with no end in sight is too much.

The best alternative I am aware of is from [coin-or.org](https://www.coin-or.org): try their Clp and Cbc solvers, depending on which type of problem you are solving. Benchmarks I have seen say they are the best choice outside of CPLEX and Gurobi. They are free, but you will need to be sure they are "legal" for your purposes. There is also a good benchmark paper on open source solvers by Bernhard Meindl and Matthias Templ.

cswor
  • Hi, I don't have that kind of issue, as my problems are not MIPs - and I know that lpsolve is not the right tool for MIPs. Writing a file, sending it to a Coin-OR solver and reading the results back is definitely going to be slower than what I have. What I was looking for was a tool that allows me to specify the entire matrix of constraints in one go and retrieve results via API calls, in a very fast way. That tool does not exist. – Infinity77 Sep 21 '20 at 17:49
  • I believe they have an API that would allow you to generate your problem and solve it via a call. Even the relaxation took almost 15 minutes. – cswor Sep 21 '20 at 19:02
  • @Infinity77, my bad. I see you did reference a couple of Coin tools. My only excuse, I'm a newbie at the open source tools for OR. – cswor Sep 21 '20 at 19:16
  • This is very late, but have you tried PuLP? It gives you access to the COIN solvers via Python. – cswor Aug 22 '22 at 05:09
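As a point of comparison for the PuLP suggestion in the last comment, driving the bundled CBC solver looks like this (a toy sketch, assuming PuLP is installed; note that PuLP builds constraints row by row, so it is still not the "whole matrix in one go" API the question asks for):

```python
import pulp

# Toy problem: maximize x + 2y  subject to  x + y <= 4, x <= 3, x, y >= 0
prob = pulp.LpProblem("toy", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=0)
y = pulp.LpVariable("y", lowBound=0)

prob += x + 2 * y   # first expression added is the objective
prob += x + y <= 4  # constraints are added one at a time
prob += x <= 3

# Solve with the CBC binary that ships with PuLP, suppressing solver output
prob.solve(pulp.PULP_CBC_CMD(msg=0))

print(pulp.LpStatus[prob.status])
print(pulp.value(prob.objective))
```

For a one-off MIP this is convenient; for the question's use case (thousands of re-solves of a large LP), the per-constraint modelling overhead is exactly the cost the author wants to avoid.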