I'm trying to find the global maximum of a Python function with a lot of parameters (500+). Unfortunately I'm unable to compute a derivative of this function - what it basically does is loop several times over a NumPy array of shape ~ (150000, 50) and then perform some calculations on the data.
So far I have been using scipy.optimize.minimize with method='Powell', which seemed to give the best results of the scipy.optimize.minimize methods.
At first I thought the output of minimize was final and the best result that could be found - but then I noticed that in some cases, when I save the coefficients and run minimize again with the coefficients from the previous run as starting values, it finds higher values than the previous run did. So what I'm basically doing is the following:
```python
import numpy as np
from scipy.optimize import minimize
from numba import jit

@jit(nopython=True)
def MyFunction(coefs, data):
    # do some calculations
    return value * -1  # negate so that minimize() effectively maximizes

data = np.load('myData.npy', allow_pickle=True)
coefs = np.random.sample(500)

for a in range(10000):
    # args must be a tuple: (data,) rather than (data)
    res = minimize(MyFunction, coefs, args=(data,), method='Powell',
                   options={'maxiter': 100000000, 'disp': True, 'return_all': True})
    coefs = res.x
```
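As a minimal sketch of this restart idea, here is a self-contained version with a toy quadratic standing in for MyFunction (the real objective works on the (150000, 50) array), plus an early-stop tolerance I added myself so the loop doesn't run all 10000 restarts when nothing improves:

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for MyFunction: a smooth bowl with its minimum at `data`.
# (Purely illustrative; the real function is far more expensive.)
def toy_objective(coefs, data):
    return np.sum((coefs - data) ** 2)

data = np.ones(50)            # hypothetical target, not the real myData.npy
coefs = np.random.sample(50)  # random starting point, as in the question

prev = np.inf
for restart in range(100):
    res = minimize(toy_objective, coefs, args=(data,), method='Powell')
    coefs = res.x  # restart from the best point found so far
    # Stop restarting once the objective stops improving meaningfully
    # (the 1e-9 tolerance is my own choice, not from the original code).
    if prev - res.fun < 1e-9:
        break
    prev = res.fun
```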
- Is there a more effective way to do this?
- How can I speed up the code? Yes, I have already made the code much faster by using jit. But what about threading? Or some other idea?