
I want to minimize a convex function over a region with bounds and constraints, so I am trying to use scipy.optimize.minimize with the SLSQP method. However, my function is only defined at discrete points. Linear interpolation does not seem like an option, because computing my function at all of its sample points would take far too much time. As a minimal working example I have:

from scipy.optimize import minimize
import numpy as np

f = lambda x: x**2

# one million sample points in [-1, 1], sorted, and the function values there
N = 1000000
x_vals = np.sort(np.random.random(N))*2 - 1
y_vals = f(x_vals)

# discretized f: return the sample value at the largest sample point below x
def f_disc(x, x_vals, y_vals):
    return y_vals[np.where(x_vals < x)[0][-1]]

print(minimize(f_disc, 0.5, method='SLSQP', bounds = [(-1,1)], args = (x_vals, y_vals)))

which yields the following output:

     fun: 0.24999963136767756
     jac: array([ 0.])
 message: 'Optimization terminated successfully.'
    nfev: 3
     nit: 1
    njev: 1
  status: 0
 success: True
       x: array([ 0.5])

which we of course know to be wrong, but the definition of f_disc tricks the optimizer into believing that the function is constant around the given point. For my problem I only have f_disc and do not have access to f. Moreover, a single call to f_disc can take as much as a minute.
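
The zero Jacobian in the output is the telltale sign: SLSQP estimates the gradient with a finite-difference step of roughly 1.5e-8, which is far smaller than the average spacing of about 2e-6 between the one million sample points, so both evaluations almost always hit the same sample. A quick check along these lines (added here purely for illustration, not part of the original MWE):

# illustration only: both evaluations usually return the identical sample value,
# so the finite-difference gradient SLSQP computes is exactly zero
eps = 1.5e-8  # roughly the step SLSQP uses for finite differences
print(f_disc(0.5, x_vals, y_vals))
print(f_disc(0.5 + eps, x_vals, y_vals))  # almost always the same value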

HolyMonk
  • Can you describe what you're trying to solve? I can't even plot your function to get a feeling for what we're looking at here... Also, `x_vals` and `y_vals` are both monotonically increasing, so wouldn't the `min` of them just be the first value? – Nils Werner Mar 27 '18 at 11:29
  • SLSQP really expects smooth functions. – Erwin Kalvelagen Mar 27 '18 at 11:41
  • @NilsWerner: The function is simply `f(x)=x**2`, and the minimum in this case should be the x value which is closest to zero. @ErwinKalvelagen: Is there some other optimization method that I can use, to which I can also give constraints and bounds? – HolyMonk Mar 27 '18 at 11:52
  • What about `x_vals[np.argmin(y_vals)]` then? – Nils Werner Mar 27 '18 at 11:55
  • The problem is that `np.argmin(y_vals)` requires us to compute `f(x)` for all `x_vals`, which is exactly what we cannot do. You should assume that only the function `f_disc` is given and that each call to `f_disc` takes a long time. – HolyMonk Mar 27 '18 at 11:59

1 Answer


If your function is not smooth, gradient-based optimization techniques will fail. You can of course use methods that do not rely on gradients, but these usually require more function evaluations.

Here are two options that can work.

The nelder-mead method does not need a gradient, but it has the drawback that it cannot handle bounds or constraints:

print(minimize(f_disc, 0.5, method='nelder-mead', args = (x_vals, y_vals)))

 #  final_simplex: (array([[ -4.44089210e-16], [  9.76562500e-05]]), array([  2.35756658e-12,   9.03710082e-09]))
 #            fun: 2.3575665763730149e-12
 #        message: 'Optimization terminated successfully.'
 #           nfev: 32
 #            nit: 16
 #         status: 0
 #        success: True
 #              x: array([ -4.44089210e-16])
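
If you really need bounds together with nelder-mead, one possible workaround (just a sketch, not something built into scipy) is to wrap the objective and return a large penalty outside the feasible region; the wrapper name f_disc_bounded and the penalty constant are arbitrary illustrative choices:

def f_disc_bounded(x, x_vals, y_vals, lo=-1.0, hi=1.0):
    # outside [lo, hi]: return a large penalty that grows with the violation,
    # which pushes the simplex back into the feasible region
    if x[0] < lo or x[0] > hi:
        return 1e6*(1.0 + abs(x[0]))
    return f_disc(x[0], x_vals, y_vals)

print(minimize(f_disc_bounded, 0.5, method='nelder-mead', args = (x_vals, y_vals)))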

differential_evolution is an optimizer that makes no assumptions about smoothness. Not only can it handle bounds, it requires them. However, it takes even more function evaluations than nelder-mead.

from scipy.optimize import differential_evolution

print(differential_evolution(f_disc, bounds = [(-1,1)], args = (x_vals, y_vals)))

#     fun: 5.5515134011907119e-13
# message: 'Optimization terminated successfully.'
#    nfev: 197
#     nit: 12
# success: True
#       x: array([  2.76298719e-06])
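
Since each call to f_disc can take about a minute, you will probably also want to cap the evaluation budget. The sketch below only uses documented differential_evolution parameters; the specific numbers are placeholders and would need tuning for the real problem:

from scipy.optimize import differential_evolution

result = differential_evolution(
    f_disc,
    bounds=[(-1, 1)],
    args=(x_vals, y_vals),
    popsize=10,    # smaller population -> fewer evaluations per generation
    maxiter=20,    # cap the number of generations
    tol=0.01,      # allow early stopping once the population converges
    polish=False,  # skip the final gradient-based polish; it gains nothing on a piecewise-constant function
    seed=0,        # reproducible runs
)
print(result)
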
MB-F