I have been learning optimization methods for a few days now. The following code that I wrote returned a RuntimeWarning:
import numpy as np
from scipy.optimize import minimize

def func(a, x):
    return 1 + (x - 0.5) * a

def log_like(a, x):
    sum1 = 0
    for i in range(len(x)):
        sum1 += np.log(func(a, x[i]))
    return sum1

def log_like_prime(a, x):
    sum1 = 0
    for i in range(len(x)):
        sum1 += (x[i] - 0.5) / (1 + (x[i] - 0.5) * a)
    return sum1

def log_like_prime2(a, x):
    sum1 = 0
    for i in range(len(x)):
        sum1 += -(x[i] - 0.5) ** 2.0 / (1 + (x[i] - 0.5) * a) ** 2.0
    return sum1

x = [0.89, 0.03, 0.50, 0.36, 0.49]
a = -1
a_opt = minimize(
    log_like, a, args=(x,), method="Newton-CG",
    jac=log_like_prime, hess=log_like_prime2
)
print(a_opt)
Running it returns the following output and warnings:
fun: array([0.03194467])
jac: array([0.18690836])
message: 'Warning: Desired error not necessarily achieved due to precision loss.'
nfev: 21
nhev: 1
nit: 0
njev: 21
status: 2
success: False
x: array([-1.])
py:17: RuntimeWarning: invalid value encountered in log
sum1 += np.log(func(a, x[i]))
py:17: RuntimeWarning: invalid value encountered in log
sum1 += np.log(func(a, x[i]))
It should not be returning an invalid value: for the given values of x = [0.89, 0.03, 0.50, 0.36, 0.49], the expression inside the logarithm should never be negative. I cannot understand why this problem occurs.
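For what it is worth, here is a minimal check (just a sketch, reusing func and the same x and starting value a = -1 from the code above) confirming that every argument passed to np.log is positive at that starting point:

import numpy as np

def func(a, x):
    return 1 + (x - 0.5) * a

x = [0.89, 0.03, 0.50, 0.36, 0.49]
a = -1

# Arguments that get passed to np.log at the starting point a = -1
vals = [func(a, xi) for xi in x]
print(vals)          # approximately [0.61, 1.47, 1.0, 1.14, 1.01], all positive
print(np.log(vals))  # no RuntimeWarning here, so the log is well defined at a = -1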