
I want to do maximum likelihood estimation with scipy.optimize.minimize for a function of the form

y = a * x^b

where the errors are assumed to be normally distributed around the predictions. Currently, the true parameters cannot be recovered at all (see below).
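For reference, the quantity being minimized below is the negative log-likelihood of n independent normal observations, which (up to an additive constant) is

NLL(a, b, sigma) = n * log(sigma) + sum_i (y_i - a * x_i^b)^2 / (2 * sigma^2)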

The negative log-likelihood function:

import numpy as np

def loglik(param, data):
    # param = [a, b, sigma]; data columns are x and y
    logLikelihood = 0.0
    for i in range(data.shape[0]):
        mu = param[0] * (data[i][0] ** param[1])    # model prediction a * x^b
        resid = float(data[i][1] - mu) / param[2]   # standardized residual
        logLikelihood += -0.5 * resid * resid

    # negative log-likelihood, dropping the constant n/2 * log(2*pi)
    return -(-data.shape[0] * np.log(param[2]) + logLikelihood)
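For reference, a vectorized version of the same objective (just a sketch; it assumes, as below, that `data` holds x in the first column and y in the second):

import numpy as np

def loglik_vec(param, data):
    a, b, sigma = param
    x, y = data[:, 0], data[:, 1]
    resid = (y - a * x ** b) / sigma    # standardized residuals
    # negative log-likelihood, dropping the constant n/2 * log(2*pi)
    return data.shape[0] * np.log(sigma) + 0.5 * np.sum(resid ** 2)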

Here's the code I'm using so far, but as you'll see, the parameters after convergence are far off the true ones.

def generate(param, x):
    # model predictions a * x^b plus i.i.d. normal noise with sd param[2]
    pred = param[0] * (x ** param[1])
    return pred + np.random.normal(0, param[2], len(x))

Generate fake data:

x = np.linspace(1, 50, num=100)
true_param = [15, 1.1, 5]
data = np.vstack((x, generate(true_param, x))).T

Fit the model:

from scipy.optimize import minimize
initParams = [1, 1, 1]
result = minimize(loglik, x0=initParams, args=(data,), method='Nelder-Mead')

print(result.x)  # should be close to 15, 1.1, 5
beginneR
  • This may be due to the `initParams` estimate not being sufficiently close to the actual values (the solver gets stuck in a local minimum). The expected result is obtained when, e.g., `initParams = [2,1,1]`. – Stelios Jul 25 '16 at 13:58
  • Thanks! That's somewhat surprising to me. But you're right, it seems that this small change is sufficient. – beginneR Jul 25 '16 at 14:28
  • This is typical when the optimization problem is not convex (as in your case): the final solution depends (strongly) on how good the initial estimate is. – Stelios Jul 25 '16 at 14:34
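Following Stelios's comments about initialization, one way to get a data-driven starting point is to fit a straight line in log-log space, since log y ≈ log(a) + b * log(x), and use the residual spread for sigma. A rough sketch with a hypothetical `initial_guess` helper (it assumes the y values are positive so their logs exist, and `loglik`/`data` as defined above):

import numpy as np
from scipy.optimize import minimize

def initial_guess(data):
    x, y = data[:, 0], data[:, 1]
    mask = y > 0                                          # keep only points where the log is defined
    slope, intercept = np.polyfit(np.log(x[mask]), np.log(y[mask]), 1)
    a0, b0 = np.exp(intercept), slope                     # back-transform the intercept
    sigma0 = np.std(y - a0 * x ** b0)                     # rough residual scale
    return [a0, b0, sigma0]

result = minimize(loglik, x0=initial_guess(data), args=(data,), method='Nelder-Mead')

With the data generated above, this should start the solver much closer to the true parameters than [1, 1, 1].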

0 Answers