
I am using scipy's minimize function, mostly with the BFGS method. I need to find out how many function evaluations were executed between two consecutive iterations. These function evaluations usually serve to compute numerical derivatives.

If it is also possible to find out how many gradient evaluations were performed between iterations, that would be even better.

Example of code:

import numpy as np
from scipy.optimize import minimize

def callback_params(theta):
    global params
    params = np.vstack((params, theta))

def rosen(X):
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

init = np.random.rand(3)
params = np.empty([0, 3])

res = minimize(rosen, init, method='BFGS', options={'disp': True},
               callback=callback_params)

How can I know the number of function evaluations between two consecutive rows of params?

Tamuzd

2 Answers


The function scipy.optimize.minimize returns an OptimizeResult, which includes the members

nfev, njev, nhev : int
Number of evaluations of the objective function and of its Jacobian and Hessian, respectively.
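For example, these counters can be read directly off the result object (using scipy's built-in rosen for brevity):

```python
import numpy as np
from scipy.optimize import minimize, rosen

res = minimize(rosen, np.random.rand(3), method='BFGS')
# Totals over the whole run, not per iteration:
print(res.nfev)  # objective function evaluations
print(res.njev)  # gradient (Jacobian) evaluations
```

Note these are totals over the entire optimization, not per-iteration counts.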

Cory Kramer
  • But that returns counts for the whole optimization process. I need to know how many function evaluations were sampled to make one iteration step. Summing over all the iterations and their numerical derivatives would give the output of [OptimizeResult](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.OptimizeResult.html#scipy.optimize.OptimizeResult). – Tamuzd Jun 07 '22 at 05:21

You can exploit the fact that scipy.optimize.minimize calls the passed callback after each iteration:

import numpy as np
from scipy.optimize import minimize

def callback(xk, d):
    # Called once per iteration: start a fresh counter for the next one
    d['obj_evals'].append(0)
    d['iters'] += 1

def rosen(X, d):
    # Count every objective evaluation under the current iteration
    d['obj_evals'][d['iters']] += 1
    return (1.0 - X[0])**2 + 100.0 * (X[1] - X[0]**2)**2 + \
           (1.0 - X[1])**2 + 100.0 * (X[2] - X[1]**2)**2

init = np.random.rand(3)

d = {'obj_evals': [0], 'iters': 0}
obj_fun = lambda x: rosen(x, d)
cb = lambda xk: callback(xk, d)

res = minimize(obj_fun, init, method='BFGS', options={'disp': True}, callback=cb)

Then, d['obj_evals'] contains the number of objective function evaluations for each iteration. This idea can easily be extended to count the number of gradient evaluations, provided you pass the gradient to minimize as well.
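A sketch of that extension, using scipy's built-in rosen and rosen_der as the objective and its analytic gradient (the counter names are illustrative, not part of any API):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

d = {'obj_evals': [0], 'grad_evals': [0], 'iters': 0}

def counted_obj(x, d):
    d['obj_evals'][d['iters']] += 1
    return rosen(x)

def counted_grad(x, d):
    d['grad_evals'][d['iters']] += 1
    return rosen_der(x)

def callback(xk, d):
    # One fresh counter pair per iteration
    d['obj_evals'].append(0)
    d['grad_evals'].append(0)
    d['iters'] += 1

res = minimize(lambda x: counted_obj(x, d), np.random.rand(3),
               jac=lambda x: counted_grad(x, d),
               method='BFGS', callback=lambda xk: callback(xk, d))

print(d['obj_evals'])   # objective evaluations per iteration
print(d['grad_evals'])  # gradient evaluations per iteration
```

With an analytic jac supplied, BFGS no longer needs extra objective evaluations for finite-difference gradients, so the per-iteration counts reflect line-search calls only; the per-iteration lists should sum to res.nfev and res.njev.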

joni