
Edit: Simplex here refers to the mathematical optimization algorithm, not to be confused with simplex noise or triangulation.

I'm implementing my own linear programming solver, and I would like to do so using 32-bit floats. I know the simplex method is very sensitive to the precision of the numbers, because it performs lots of calculations, and if too little precision is used, rounding errors may accumulate. But I would still like to implement it using 32-bit floats so I can make the instructions 4-wide, that is, so I can use SIMD to perform 4 calculations at a time. I'm aware that I could use doubles and make the instructions 2-wide, but 4 is greater than 2 :)
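For concreteness, this is the kind of 4-wide inner loop I have in mind; a minimal sketch with illustrative names of the row update performed in each simplex pivot:

```cpp
// Sketch of a 4-wide simplex row update using SSE intrinsics.
// Names are illustrative; this is not taken from an actual solver.
#include <immintrin.h>
#include <cstddef>

// Each pivot performs row[j] -= factor * pivot[j] across the tableau.
void eliminateRow(float* row, const float* pivot, float factor, std::size_t n) {
    const __m128 f = _mm_set1_ps(factor);
    std::size_t j = 0;
    for (; j + 4 <= n; j += 4) {
        __m128 r = _mm_loadu_ps(row + j);
        __m128 p = _mm_loadu_ps(pivot + j);
        _mm_storeu_ps(row + j, _mm_sub_ps(r, _mm_mul_ps(f, p)));
    }
    for (; j < n; ++j)          // scalar tail for n not divisible by 4
        row[j] -= factor * pivot[j];
}
```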

I have had problems with my floating-point implementation where the solution was suboptimal or the problem was reported as infeasible. This happens especially with mixed-integer linear programs, which I solve with the branch and bound method.
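Branch and bound decides integrality with a tolerance, and with 32-bit floats that tolerance has to be much looser than what is typical with doubles; a minimal sketch, where the value of `INT_EPS` is an illustrative assumption:

```cpp
// Sketch of a tolerance-based integrality test used when branching.
#include <cmath>

// INT_EPS is an assumed, illustrative tolerance; with 32-bit floats it
// likely needs to be looser than typical double-precision tolerances.
constexpr float INT_EPS = 1e-4f;

bool isIntegral(float x) {
    return std::fabs(x - std::round(x)) <= INT_EPS;
}
```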

So my question is: what can I do to minimize the risk that rounding errors result in infeasible, unbounded, or suboptimal solutions?

I know one thing I can do is scale the input values so that they are close to one (http://lpsolve.sourceforge.net/5.5/scaling.htm). Is there anything else I can do?
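For illustration, here is a rough sketch of one pass of geometric-mean row scaling as I understand it from that page; this is not lpsolve's actual code, and a full scheme would also scale columns and un-scale the solution afterwards:

```cpp
// One pass of geometric-mean row scaling (illustrative sketch).
// A is a dense m x n matrix in row-major order; b is the right-hand side.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

void scaleRowsOnce(std::vector<float>& A, std::vector<float>& b,
                   std::size_t m, std::size_t n) {
    for (std::size_t i = 0; i < m; ++i) {
        float lo = INFINITY, hi = 0.0f;
        for (std::size_t j = 0; j < n; ++j) {
            float a = std::fabs(A[i * n + j]);
            if (a > 0.0f) { lo = std::min(lo, a); hi = std::max(hi, a); }
        }
        if (hi == 0.0f) continue;              // skip empty rows
        float s = 1.0f / std::sqrt(lo * hi);   // geometric mean of extremes
        for (std::size_t j = 0; j < n; ++j) A[i * n + j] *= s;
        b[i] *= s;                             // keep the system equivalent
    }
}
```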

Davi Doro
  • To my understanding, the title is a bit misleading; the parallelization of the implementation as such is unrelated to the numerical stability of the algorithm. Please clarify. – Codor Feb 15 '19 at 14:40
  • Do you actually have problems with precision, or do you assume that you may get problems? What part of your algorithm did you implement with SIMD, and what precision do you expect? – chtz Feb 15 '19 at 14:42
  • What do you mean by parallelization? I don't want to parallelize it. I'm going to make the instructions wide; my implementation is not multithreaded, at least not yet. – Davi Doro Feb 15 '19 at 14:44
  • "perform[ing] lots of calculations" does not necessary imply "very sensitive to the precision", btw. Especially, iterative algorithms which converge to a fixpoint are usually very stable to numerical noise. (I'm no expert on the simplex algorithm, though). – chtz Feb 15 '19 at 14:47
  • I haven't SIMDified anything yet. I have had problems with precision leading to suboptimal solutions using single-precision 32-bit floats, so I want to address as many of the precision problems as possible before I SIMDify the instructions. – Davi Doro Feb 15 '19 at 14:48
  • So your question is actually unrelated to SIMD (yet), except that eventually going to SIMD is the motivation to use `float` instead of `double`. I'd suggest renaming it to something like "Numerical stability of the simplex algorithm" (maybe also find a tag for that). And I don't know if all simplex algorithms have precision problems, or if you just have a poor implementation. – chtz Feb 15 '19 at 14:52
  • You are correct, it is actually unrelated to SIMD yet. I've edited the question. – Davi Doro Feb 15 '19 at 15:03
  • This may be too broad a question for Stack Overflow. A cursory web search shows there are algorithms for implementing the simplex method that are stable and algorithms for implementing it that are not stable. Clearly, if you are concerned about accuracy, you should prefer an algorithm that is stable. But evaluating the known algorithms and recommending one is off-topic for Stack Overflow. – Eric Postpischil Feb 15 '19 at 15:24
  • Only about 7 decimal digits of precision. You must be brave. I think there is more interest here in quad-precision than in single-precision solvers. – Erwin Kalvelagen Feb 15 '19 at 23:25
  • The statement in your [link](http://lpsolve.sourceforge.net/5.5/scaling.htm) about scaling is bad—it is either nonsense (the point of floating-point is that the point floats—scaling is built-in, and the arithmetic works the same for each exponent up to its bounds), or it is based on some flaw in the lpsolve software it is describing (why would the software have any absolute scale dependency?), or it is poorly written or otherwise unclear what it is trying to convey. – Eric Postpischil Feb 16 '19 at 13:07
  • I find this link actually quite understandable, and it definitely isn't nonsense. You are right that subtracting large numbers from large numbers and small numbers from small numbers doesn't make a difference. But the page talks explicitly about subtracting small numbers from big numbers. You can lose a lot of precision there, because the small number is not representable well with the exponent of the big number. The software does not have any absolute scale dependency! – pulp_user Dec 06 '19 at 13:19
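A minimal demonstration of the absorption effect pulp_user describes:

```cpp
// With 32-bit floats, adding 1 to 1e8 changes nothing: at that exponent
// the spacing between representable floats is 8, so the 1 is absorbed.
#include <cstdio>

int main() {
    float big = 1e8f;
    float small = 1.0f;
    std::printf("%.1f\n", (big + small) - big);   // prints 0.0, not 1.0
    return 0;
}
```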

1 Answer


Yes. I tried to implement an algorithm for the extended knapsack problem using the branch and bound method with a greedy algorithm as the heuristic, which is the exact analogue of the simplex method running with a pivoting strategy that chooses the largest objective increase.
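For reference, a sketch of the greedy (fractional) bound meant above; the type and field names are illustrative:

```cpp
// Greedy upper bound for the knapsack relaxation: take items in
// decreasing value/weight ratio, allowing one fractional item. This
// mirrors a simplex pivot that picks the largest objective increase.
#include <algorithm>
#include <vector>

struct Item { float value, weight; };

float greedyBound(std::vector<Item> items, float capacity) {
    std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
        return a.value / a.weight > b.value / b.weight;
    });
    float bound = 0.0f;
    for (const Item& it : items) {
        if (it.weight <= capacity) { bound += it.value; capacity -= it.weight; }
        else { bound += it.value * (capacity / it.weight); break; }
    }
    return bound;
}
```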

I had problems with the numerical stability of the algorithm too.

Personally, I don't think there is an easy way to eliminate these issues as long as we keep using floating point, but there is a way to detect instability during the branching process.

Based on experiments rather than rigorous stability analysis, I believe the majority of errors propagate through an integer solution that is extremely close to the constraints of the system.

Given any integer solution, compute the slack for that solution; if any element of the slack vector is extremely small, on the order of 1e-14 to 1e-15, stop the branching and report instability.
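A sketch of that check, assuming constraints of the form Ax <= b stored densely; the threshold is the illustrative double-precision value mentioned above (with 32-bit floats it would need to be far larger):

```cpp
// Flag instability when a candidate integer solution sits on a constraint
// within numerical noise. A is m x n, row-major.
#include <cmath>
#include <cstddef>
#include <vector>

constexpr double STAB_EPS = 1e-14;   // illustrative; see the text above

bool isUnstable(const std::vector<double>& A, const std::vector<double>& b,
                const std::vector<double>& x, std::size_t m, std::size_t n) {
    for (std::size_t i = 0; i < m; ++i) {
        double ax = 0.0;
        for (std::size_t j = 0; j < n; ++j) ax += A[i * n + j] * x[j];
        double slack = b[i] - ax;
        if (slack != 0.0 && std::fabs(slack) < STAB_EPS)
            return true;   // nearly binding: stop branching, report it
    }
    return false;
}
```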

Alto Lagato