
I am having some difficulty understanding how performance in non-linear optimisation is influenced by the specific way the solver engine is interfaced.

We have an optimisation model that, in its first version, was written in GAMS. IPOPT (a common FOSS non-linear solver engine) reported, for each optimisation, an execution time of 1.4 CPU seconds in IPOPT itself (without function evaluations) and 0.2 CPU seconds in function evaluations.

When we converted the model to C++ (for better accounting of the non-optimisation components of the model) and interfaced IPOPT through its C++ API (using ADOL-C and ColPack for automatic differentiation), we got execution times of 0.7 seconds in IPOPT and 9.4 seconds in function evaluations (the improvement in IPOPT is likely due to the fact that, by compiling IPOPT from source, we were able to use better linear solvers that are not available in the GAMS version of IPOPT).

So, using C++, admittedly with badly optimised code, gave us function evaluations roughly 50 times slower than in GAMS, partially compensated by a better solver time.

We are now evaluating the feasibility of converting the model to other languages, either Python with Pyomo or Julia with JuMP.

But first we would like to understand how the function evaluations made by the solver at each step depend on the specific implementation language.

With C++, it is pretty evident that the functions making up the optimisation model are directly executed (evaluated) at each iteration, so the way they are implemented matters (and, in particular, the gradient and Hessian are recomputed each time, at least in our implementation).

How is it with Pyomo and JuMP? Would each iteration be evaluated in Python or Julia, or would Pyomo and JuMP instead first render the model in (I guess) C, derive (not merely evaluate) the gradient and Hessian once and for all, and then have this "C version" evaluated each time? That would clearly make a big difference, especially for Python.

Antonello
  • The first questions that come to mind: Are those two implementations equal? Is the data equal? The model? The random seeds? Do you obtain the same solution? I'm pretty sure some of these questions are answered with a no. If two implementations are not doing exactly the same thing, it's hard to evaluate them, especially in solving hard problems where heuristic-like behaviour is at the core! This implies that a much bigger benchmark is needed to really analyse this. I would not expect a huge impact from the function evaluation, maybe from the automatic differentiation. But that's just guessing. – sascha Nov 25 '16 at 12:04
  • (1) The GAMS solver IPOPTH uses MA27, often getting better performance than IPOPT, which uses MUMPS. (2) GAMS function and gradient evaluation may share some subexpressions; sometimes this can save some time. (3) You can quickly try out Pyomo by letting the CONVERT solver translate the model into (ugly, scalar) Pyomo code. – Erwin Kalvelagen Nov 26 '16 at 06:17

2 Answers


Pyomo interfaces to Ipopt by converting the model to the NL file format. It assumes the "ipopt" executable is in your PATH (Ipopt compiled with the ASL). All function evaluations that take place during optimization happen in C within the AMPL Solver Library.
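For concreteness, here is a minimal sketch of that workflow (the Rosenbrock model below is a made-up stand-in, not the asker's model): Pyomo writes the problem to an NL file, launches the "ipopt" executable found on the PATH, and reads the result back from a .sol file, so the per-iteration function, gradient, and Hessian evaluations run inside the ASL-linked executable rather than in Python.

    import pyomo.environ as pyo

    # Toy NLP (Rosenbrock), standing in for the real model.
    model = pyo.ConcreteModel()
    model.x = pyo.Var(initialize=1.5)
    model.y = pyo.Var(initialize=1.5)
    model.obj = pyo.Objective(
        expr=(1 - model.x)**2 + 100 * (model.y - model.x**2)**2)

    # SolverFactory('ipopt') locates the "ipopt" executable on the PATH,
    # writes the model to an NL file, and invokes Ipopt on it; the solve()
    # call blocks while all iteration-by-iteration evaluations happen
    # inside the ASL, then loads the solution back into the model.
    results = pyo.SolverFactory('ipopt').solve(model, tee=True)

Python is involved only in building the model and in reading the solution back; no Python code runs in the solver's evaluation loop.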

Gabe Hackebeil

JuMP has compared favorably with GAMS in our own benchmarks; take that as you may. The derivative computations are entirely in Julia (which is fast); there is no compiled C code involved.

mlubin