
I am trying to minimize a function of 76 parameters. I want every parameter value to be between 0 and 5.

The default algorithm, L-BFGS-B, failed to find an optimum.

So I tried the TNC algorithm, and while it achieves a good objective function value, the solution it returns violates the bounds. Its output does not indicate any failure either.

import numpy as np
import scipy.optimize

# S, mse, and print_callback are defined elsewhere in my script; 2 * S == 76
x0 = np.zeros(2 * S)
bnds = tuple([(0, 5)] * 2 * S)  # one (0, 5) bound per parameter
r = scipy.optimize.minimize(mse, x0, method='TNC', bounds=bnds,
                            callback=print_callback, options={'disp': True})

This is the output of the optimization by TNC:

(prior_every_personb_mse_sg.py:8229): Gdk-CRITICAL **: 15:06:09.828: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed

(prior_every_personb_mse_sg.py:8229): Gdk-CRITICAL **: 15:06:09.831: gdk_cursor_new_for_display: assertion 'GDK_IS_DISPLAY (display)' failed
  NIT   NF   F                       GTG
    0    1  2.544667777777778E+03   1.71075138E+22
tnc: fscale = 1.5291e-12
tnc: |xn-xn-1] = 9.73002e-10 -> convergence
    1   27  1.862298888888889E+03   2.84054419E+22
tnc: Converged (|x_n-x_(n-1)| ~= 0)
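
For reference, this is how I check whether the returned solution actually respects the bounds; a minimal sketch, where `r` is the `OptimizeResult` from the call above:

print(r.success, r.status, r.message)   # the solver's own verdict
violations = (r.x < 0) | (r.x > 5)      # elementwise bound check
print("parameters outside [0, 5]:", np.count_nonzero(violations))
print("min/max of solution:", r.x.min(), r.x.max())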
  • What about starting with a midpoint, e.g. `x0 = np.full(2 * S, fill_value=2.5)`, and not on the boundary? – Learning is a mess Jul 04 '19 at 14:20 (see the sketch after this thread)
  • @Learningisamess Ok, I am trying that. This optimization takes many hours, so I'll reply once it is done. But given that 0 is part of the region I want it to search, should this actually matter? – Sanit Jul 04 '19 at 14:25
  • If you want an optimiser that is usually faster than `scipy.optimize`, I suggest you have a look at `Pyomo`, FYI. – Learning is a mess Jul 04 '19 at 14:27
  • Initializing all parameters to 2.5 did not help. I am still getting values outside the bounds. Weirdly, all 76 values are now of the form (some integer)(decimal point)5. – Sanit Jul 04 '19 at 15:58
  • Can you share the function `mse`? – Cleb Jul 04 '19 at 18:02
  • @Cleb Unfortunately not. The function `mse` itself calls a lot of other functions, so it wouldn't be useful to look at its code. I can describe what it does, though. The 76 parameters are the parameters of a reinforcement learning agent, and `mse` returns the mean squared error between the performance (rewards) of the described agent and a predefined agent. I'd understand if scipy were unable to optimize this function given how complicated it is; the part I find odd is that it claims successful convergence while being outside the bounds. – Sanit Jul 05 '19 at 09:25
  • @Cleb Any ideas what might be going wrong? – Sanit Jul 09 '19 at 09:56
  • No. I would have also tried different initial conditions, as suggested earlier. You could also try to make the bounds more flexible and see whether that leads to anything. Without knowing the function it is hard to tell what goes wrong – at least for me. – Cleb Jul 09 '19 at 12:21
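
A minimal sketch of the midpoint initialization suggested in the comments, reusing `mse`, `S`, `bnds`, and `print_callback` from above (illustrative only, not a confirmed fix):

x0_mid = np.full(2 * S, fill_value=2.5)  # start at the center of the box, not on the boundary
r_mid = scipy.optimize.minimize(mse, x0_mid, method='TNC', bounds=bnds,
                                callback=print_callback, options={'disp': True})
# verify the bounds before trusting the result
assert np.all((r_mid.x >= 0) & (r_mid.x <= 5)), "solution violates the bounds"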

0 Answers