
I am using LPSolve for a system with approximately 40,000 variables, 100 equality constraints of the form sum(a_i*x_i) = c, and 40,000 upper-bound constraints of the form x_i <= n. Normally I get a solution, but occasionally LPSolve doesn't reach one, whereas running the same data through a similar linear-programming setup in MATLAB will provide a solution. Judging from the goodness of fit of the MATLAB solutions to the sum constraints, it seems to me that MATLAB is willing to accept a larger final error (parts in 10^-6), whereas the LPSolve solution (if it does solve) is always good to around parts in 10^-12.
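
For concreteness, here is a rough sketch (not my actual code) of how a problem of this shape might be set up through the lp_solve C API; `NVARS`, `NEQ`, the coefficient arrays and the bound value are all placeholders:

```c
/* Rough sketch of building a problem of this shape with the lp_solve C API:
 * NVARS variables, NEQ equality constraints of the form sum(a_i*x_i) = c,
 * and an upper bound on every variable.  All names/sizes are placeholders. */
#include <stdlib.h>
#include "lp_lib.h"

#define NVARS 40000
#define NEQ   100

int build_and_solve(double a[NEQ][NVARS], double c[NEQ], double upper)
{
    lprec *lp = make_lp(0, NVARS);      /* start with 0 rows, NVARS columns */
    if (lp == NULL)
        return -1;

    int *colno = malloc(NVARS * sizeof(int));
    for (int j = 0; j < NVARS; j++)
        colno[j] = j + 1;               /* lp_solve columns are 1-based */

    /* objective omitted here; it would be set with set_obj_fnex() */

    set_add_rowmode(lp, TRUE);          /* faster when adding many rows */
    for (int k = 0; k < NEQ; k++)       /* the 100 sum(a_i*x_i) = c constraints */
        add_constraintex(lp, NVARS, a[k], colno, EQ, c[k]);
    set_add_rowmode(lp, FALSE);

    for (int j = 1; j <= NVARS; j++)    /* the 40,000 x_i <= n constraints,    */
        set_upbo(lp, j, upper);         /* expressed as simple variable bounds */

    int ret = solve(lp);                /* 0 (OPTIMAL) on success */

    free(colno);
    delete_lp(lp);
    return ret;
}
```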

I have tried the LPSolve manuals/tutorials etc. but have not found a way to reduce the required quality of fit. There are a lot of variables/settings which the tutorials explain can be changed, but they don't appear to explain what those changes actually mean (my background expertise is certainly not in linear programming). So my questions are:

Is my assumption valid - that LPSolve by default requires the solution to have a very small error, and that this is somehow different from what MATLAB's linprog does?

If so, can this requirement be relaxed, and how?
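
To make the second question concrete, here is a hedged sketch of the kind of calls I imagine might do this, based on the lp_solve C API; I do not know whether these are the right settings (that is essentially the question), and the values are only illustrative:

```c
#include "lp_lib.h"

/* Hedged sketch: loosen lp_solve's numerical tolerances before calling
 * solve().  Whether these are the knobs that correspond to linprog's looser
 * fit is exactly what I am unsure about; the values are only illustrative. */
static void relax_tolerances(lprec *lp)
{
    set_epsel(lp, 1e-9);     /* rounding-to-zero tolerance; the default (1e-12)
                                matches the ~1e-12 fit I see from LPSolve */
    set_epspivot(lp, 1e-5);  /* minimum size accepted for a pivot element;
                                the default is considerably tighter */
}
```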

Penguino
  • Perhaps [`set_obj_bound`](http://lpsolve.sourceforge.net/5.1/set_obj_bound.htm) and [`set_mip_gap`](http://lpsolve.sourceforge.net/5.1/set_mip_gap.htm) might be what you want? If you are using the [standalone lp_solve](http://lpsolve.sourceforge.net/5.1/lp_solve.htm) program they can be set through the `-b` and `-ga`/`-gr` options (see the sketch below the comments). – jodag Aug 10 '17 at 03:04
  • I am not following what you are saying about the problem with `LpSolve`, but I would agree that MATLAB's `linprog` is a stronger solver than `LPSolve`. – Erwin Kalvelagen Aug 11 '17 at 06:13
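
A hedged sketch of how the calls mentioned in jodag's comment would be applied through the C API; note that both appear to be branch-and-bound (MIP) settings, so it is not obvious they change anything for a pure LP like the one described above:

```c
#include "lp_lib.h"

/* Sketch of the two calls suggested in the comment above.  Both are
 * branch-and-bound controls, so for a pure LP with no integer variables
 * they may have no effect; the values are illustrative only. */
static void relax_bb_criteria(lprec *lp)
{
    set_obj_bound(lp, 0.0);        /* initial bound on the objective (the -b option) */
    set_mip_gap(lp, FALSE, 1e-6);  /* relative MIP gap (-gr); pass TRUE for the
                                      absolute gap (-ga) instead */
}
```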

0 Answers