I am running multiple bounded optimizations with a known gradient (100 to 300 variables). Sometimes TNC returns "unable to progress".
For my objective function, L-BFGS-B is much slower and produces poorer results than TNC (maybe because TNC does better when the number of variables is large). Using basinhopping with L-BFGS-B and niter_success set to 10, I get results close to TNC's, but about 20x slower. When TNC returns "unable to progress", L-BFGS-B returns better results. So my current solution is to run basinhopping whenever TNC fails with status 6, "unable to progress".
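A minimal sketch of that fallback, assuming a toy quadratic objective in place of my real one (the function names `fun`/`grad`/`solve` are just illustrative):

```python
import numpy as np
from scipy.optimize import minimize, basinhopping

def fun(x):
    # Toy objective: minimum at x = 1 everywhere.
    return np.sum((x - 1.0) ** 2)

def grad(x):
    # Analytic gradient of the toy objective.
    return 2.0 * (x - 1.0)

def solve(x0, bounds):
    # First try TNC with the known gradient.
    res = minimize(fun, x0, jac=grad, method='TNC', bounds=bounds)
    if res.status == 6:  # TNC's "Unable to progress" return code
        # Fall back to basinhopping around L-BFGS-B; much slower,
        # but it tends to recover when TNC stalls.
        res = basinhopping(
            fun, x0,
            minimizer_kwargs={'method': 'L-BFGS-B', 'jac': grad,
                              'bounds': bounds},
            niter=50, niter_success=10)
    return res

x0 = np.zeros(5)
bounds = [(-2.0, 2.0)] * 5
res = solve(x0, bounds)
```

On this toy problem TNC succeeds directly, so the basinhopping branch only fires on harder inputs.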
It seems that "unable to progress" is returned when TNC is unable to reduce the objective function for some number of iterations. I played a little with the scale factor and inconsistently got better results.
To my knowledge, the scale in an optimization problem lets the optimizer know which variables are more influential. I have this information, and I believe using it will reduce the number of "unable to progress" failures I am getting. According to the docs (https://docs.scipy.org/doc/scipy/reference/optimize.minimize-tnc.html), the default scale factors "are up-low for interval bounded variables and 1+|x| for the others". So it's upper bound minus lower bound for bounded variables, but I can't work out how unbounded variables are treated. What is 1+|x|?
Also, I manually computed up - low and set the scale of unbounded variables to 1, and this returns different results every time I run the optimization with the same input. (Strange?)
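For reference, this is how I pass the manual scale array, via the TNC `scale` option, again on a stand-in quadratic objective (`fun`/`grad` are placeholders for my real problem):

```python
import numpy as np
from scipy.optimize import minimize

def fun(x):
    return np.sum((x - 1.0) ** 2)

def grad(x):
    return 2.0 * (x - 1.0)

x0 = np.zeros(4)
bounds = [(-2.0, 2.0), (-2.0, 2.0), (None, None), (None, None)]

# Manual scale: up - low for bounded variables, 1.0 for unbounded ones.
scale = np.array([up - low if (low is not None and up is not None) else 1.0
                  for low, up in bounds])

res = minimize(fun, x0, jac=grad, method='TNC', bounds=bounds,
               options={'scale': scale})
```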
I also tried to look into the code to see how TNC handles the scale. The scipy wrapper sends an empty array or the input array to the C code: https://github.com/scipy/scipy/blob/master/scipy/optimize/tnc/moduleTNC.c. In the C code I am unable to find where the scale array is created or how it's used, and I also could not find where "unable to progress" is triggered. Can someone point me to where I should look?