I have some difficulty understanding how performance in non-linear optimisation is influenced by the specific way the solver engine is interfaced.
We have an optimisation model that, in its first version, was written in GAMS. IPOPT (a common FOSS non-linear solver engine) reported, for each optimisation, 1.4 CPU seconds spent in IPOPT itself (without function evaluations) and 0.2 CPU seconds spent in function evaluations.
When we converted the model to C++ (for a better accounting of the non-optimisation components of the model) and interfaced IPOPT through its C++ API (using ADOL-C and ColPack for automatic differentiation), we got execution times of 0.7 seconds in IPOPT and 9.4 seconds in function evaluations. The improvement on the IPOPT side is likely due to the fact that, compiling IPOPT from source, we were able to use better linear solvers not available in the GAMS version of IPOPT.
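To give an idea of where the function-evaluation time goes, this is roughly the ADOL-C pattern we follow, shown here as a minimal sketch with a hypothetical two-variable objective (not our actual model): the function is taped once, and the tape is then re-evaluated through ADOL-C's drivers every time the solver asks for derivatives.

```cpp
// Minimal ADOL-C sketch with a hypothetical objective, not our real model.
#include <adolc/adolc.h>

const short TAG = 1;

// Record the computational graph of the objective once ("taping").
void tape_objective(const double* x0) {
    trace_on(TAG);
    adouble ax0, ax1, af;
    ax0 <<= x0[0];  ax1 <<= x0[1];                 // mark independent variables
    af = 100.0 * (ax1 - ax0 * ax0) * (ax1 - ax0 * ax0)
       + (1.0 - ax0) * (1.0 - ax0);
    double f;
    af >>= f;                                      // mark dependent variable
    trace_off();
}

// Called from our eval_grad_f callback at every iteration:
// the tape is re-evaluated on each call.
// (ColPack enters via ADOL-C's sparse drivers for the Jacobian/Hessian, omitted here.)
void objective_gradient(int n, const double* x, double* g) {
    gradient(TAG, n, x, g);
}
```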
So the C++ version, admittedly with badly optimised code, gave us function-evaluation times roughly 50 times slower than GAMS, only partially compensated by the better solver time.
We are now evaluating the feasibility of converting the model to other languages, either Python with Pyomo or Julia with JuMP.
But first we would like to understand how the function evaluation performed by the solver at each step depends on the language in which the model is implemented.
With C++, it is pretty evident that the functions making up the optimisation model are directly executed (evaluated) at each iteration, so the way they are implemented does matter (in particular, the gradient and Hessian are recomputed at each call, at least in our implementation).
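To make that concrete, here is a minimal, hypothetical sketch of the kind of Ipopt TNLP callback structure we use (a toy unconstrained Rosenbrock problem, not our actual model): eval_f and eval_grad_f are ordinary C++ member functions that Ipopt calls back at every iteration.

```cpp
// Hypothetical minimal example, not the real model: an unconstrained Rosenbrock
// problem via Ipopt's C++ TNLP interface, illustrating that the evaluation
// callbacks are plain C++ functions invoked by Ipopt at every iteration.
#include <IpIpoptApplication.hpp>
#include <IpTNLP.hpp>
#include <cmath>

using namespace Ipopt;

class RosenbrockNLP : public TNLP {
public:
  // 2 variables, no constraints.
  bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
                    Index& nnz_h_lag, IndexStyleEnum& index_style) override {
    n = 2; m = 0; nnz_jac_g = 0; nnz_h_lag = 0;
    index_style = C_STYLE;
    return true;
  }
  bool get_bounds_info(Index n, Number* x_l, Number* x_u,
                       Index m, Number* g_l, Number* g_u) override {
    x_l[0] = x_l[1] = -10.0; x_u[0] = x_u[1] = 10.0;
    return true;
  }
  bool get_starting_point(Index n, bool init_x, Number* x,
                          bool init_z, Number* z_L, Number* z_U,
                          Index m, bool init_lambda, Number* lambda) override {
    x[0] = -1.2; x[1] = 1.0;
    return true;
  }
  // Objective: re-evaluated by Ipopt whenever it needs a fresh value.
  bool eval_f(Index n, const Number* x, bool new_x, Number& obj) override {
    obj = 100.0 * std::pow(x[1] - x[0] * x[0], 2) + std::pow(1.0 - x[0], 2);
    return true;
  }
  // Gradient: hand-coded here; in our real model this is where ADOL-C
  // re-evaluates the tape at each call.
  bool eval_grad_f(Index n, const Number* x, bool new_x, Number* g) override {
    g[0] = -400.0 * x[0] * (x[1] - x[0] * x[0]) - 2.0 * (1.0 - x[0]);
    g[1] =  200.0 * (x[1] - x[0] * x[0]);
    return true;
  }
  // No constraints, so these are trivial.
  bool eval_g(Index n, const Number* x, bool new_x, Index m, Number* g) override {
    return true;
  }
  bool eval_jac_g(Index n, const Number* x, bool new_x, Index m, Index nele_jac,
                  Index* iRow, Index* jCol, Number* values) override {
    return true;
  }
  void finalize_solution(SolverReturn status, Index n, const Number* x,
                         const Number* z_L, const Number* z_U, Index m,
                         const Number* g, const Number* lambda, Number obj_value,
                         const IpoptData* ip_data,
                         IpoptCalculatedQuantities* ip_cq) override {}
};

int main() {
  SmartPtr<TNLP> nlp = new RosenbrockNLP();
  SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
  // Quasi-Newton approximation so this sketch does not need to provide eval_h.
  app->Options()->SetStringValue("hessian_approximation", "limited-memory");
  app->Initialize();
  app->OptimizeTNLP(nlp);
  return 0;
}
```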
How does it work with Pyomo and JuMP? Would each iteration be evaluated in Python or Julia, or would Pyomo and JuMP instead first render the model in (I guess) C, derive (not just evaluate) the gradient and Hessian once and for all, and then have this "C version" evaluated at each iteration? That would clearly make a big difference, especially for Python.