I am trying to minimize a very long function (the sum of roughly 500,000 sub-functions) in order to fit some parameters of a probabilistic model. I use scipy.optimize.minimize. I tried both the Powell and Nelder-Mead algorithms, and Powell is noticeably faster in my setting. But I still don't understand how to force the process to give me some result after a given amount of time, even if it is not "optimal".
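For context, the only workaround I can think of is to wrap the objective myself: remember the best point seen so far, and abort the solver by raising an exception once a wall-clock budget is exceeded. A minimal sketch of that idea (the TimeBudget class and the 10-second budget are my own invention, and scipy.optimize.rosen stands in for my real objective):

import time
import numpy as np
from scipy.optimize import minimize, rosen

class TimeBudget:
    # wraps an objective; raises TimeoutError after `seconds` while
    # remembering the best point evaluated so far
    def __init__(self, fun, seconds):
        self.fun = fun
        self.deadline = time.monotonic() + seconds
        self.best_x, self.best_f = None, np.inf

    def __call__(self, x):
        f = self.fun(x)
        if f < self.best_f:  # track the best evaluation so far
            self.best_x, self.best_f = np.copy(x), float(f)
        if time.monotonic() > self.deadline:  # out of time: abort the solver
            raise TimeoutError
        return f

wrapped = TimeBudget(rosen, seconds=10)
try:
    res = minimize(wrapped, np.zeros(4), method='Powell')
except TimeoutError:
    pass  # the best parameters found so far are still on the wrapper
print(wrapped.best_x, wrapped.best_f)

This works, but it feels like something the options should already let me do.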
I fill in the options maxiter, maxfev, xtol and ftol, but I don't really understand them: I put a print in my function and noticed that the algorithm evaluates it more than maxfev times, yet when it reaches maxiter it stops with a "maximum number of iterations has been exceeded" warning. How do these options work with respect to the two algorithms I am using? The documentation is unclear to me.
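To make my confusion concrete, here is a minimal experiment (again with scipy.optimize.rosen as a stand-in objective, not my real model). As far as I understand, one "iteration" of Powell or Nelder-Mead bundles many function evaluations, and the limits seem to be checked only between iterations or line searches, which would explain the overshoot I see:

import numpy as np
from scipy.optimize import minimize, rosen

for method in ('Powell', 'Nelder-Mead'):
    res = minimize(rosen, np.zeros(4), method=method,
                   options={'maxiter': 3, 'disp': True})
    # res.nit counts outer iterations, res.nfev individual objective calls;
    # one iteration performs many evaluations, so nfev is far larger than nit
    print(method, res.nit, res.nfev)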
My code:
import numpy as np
from scipy.optimize import minimize

def log_likelihood(r, alpha, a, b, customers):
    # reject invalid parameters by returning -inf (the worst possible likelihood)
    if r <= 0 or alpha <= 0 or a <= 0 or b <= 0:
        return -np.inf
    c = sum(log_likelihood_individual(r, alpha, a, b, x, tx, t)
            for x, tx, t in customers)
    print(-c)  # trace every evaluation of the objective
    return c

# log_likelihood_individual, customers and print_callback are defined elsewhere
negative_ll = lambda params: -log_likelihood(*params, customers=customers)
params0 = (1, 1, 1, 1)
res = minimize(negative_ll, params0, method='Powell', callback=print_callback,
               options={'disp': True, 'ftol': 0.05, 'maxiter': 3, 'maxfev': 15})
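One more thing I noticed while digging through the docs (please correct me if I am wrong): in recent SciPy versions the two methods name their tolerance options differently. Powell takes relative xtol/ftol, while Nelder-Mead takes absolute xatol/fatol, and an unknown option only triggers an OptimizeWarning rather than an error. A sketch, once more with rosen as a placeholder objective:

import numpy as np
from scipy.optimize import minimize, rosen

# Powell: relative tolerances xtol/ftol
res_p = minimize(rosen, np.zeros(4), method='Powell',
                 options={'xtol': 1e-4, 'ftol': 1e-4, 'maxfev': 5000})
# Nelder-Mead: absolute tolerances xatol/fatol
res_n = minimize(rosen, np.zeros(4), method='Nelder-Mead',
                 options={'xatol': 1e-4, 'fatol': 1e-4, 'maxfev': 5000})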