I am trying to maximize a certain logLikelihood function, given a trajectory T and a parameter tMax, with respect to a set of 2d + 2d^2 parameters X, where d is a fixed integer.
Each parameter's valid range is (0, 10), except for the parameters with indices 2*i + 1 for i in range(d) (using Python indexing conventions), whose valid range is (-10, 10). Additionally, I add linear constraints requiring that X[2 * i] + X[2 * i + 1] * (tMax + 1.0) >= 0 for each i in range(d).
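For concreteness, with d = 2 and tMax = 500.0 (the values used below), these linear constraints reduce to just two rows:

X[0] + 501.0 * X[1] >= 0
X[2] + 501.0 * X[3] >= 0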
Here is my implementation:
import numpy as np
from scipy import optimize

# T given
tMax = 500.0
_d = 2

# gradient of the constraint X[2 * i] + X[2 * i + 1] * (tMax + 1.0) for a fixed i in range(_d)
def grad_for_i(i, d, t_max):
    g = np.zeros(2 * d + 2 * d**2)
    g[2 * i] = 1.0
    g[2 * i + 1] = t_max + 1.0
    return g

# array of zeros of length l with a 1.0 at index j
def one_on_jth(j, l):
    r = [0.0 for _ in range(l)]
    r[j] = 1.0
    return r

new_lin_const = {
    'type': 'ineq',
    'fun': lambda x: np.array(
        [x[2 * i] + x[2 * i + 1] * (tMax + 1.0) for i in range(_d)]
        + [x[j] for j in range(2 * _d + 2 * _d**2) if j not in [2 * i + 1 for i in range(_d)]]
    ),
    'jac': lambda x: np.array(
        [grad_for_i(i, _d, tMax) for i in range(_d)]
        + [one_on_jth(j, 2 * _d + 2 * _d**2) for j in range(2 * _d + 2 * _d**2) if j not in [2 * i + 1 for i in range(_d)]]
    )
}

X0 = [1.0 for _ in range(2 * (_d ** 2) + 2 * _d)]
bds = [(0.0, 10.0) for _ in range(2 * (_d ** 2) + 2 * _d)]
for i in range(_d):
    bds[2 * i + 1] = (-10.0, 10.0)

res = optimize.minimize(lambda x, args: -logLikelihood(x, args[0], args[1]),
                        x0=X0, args=([T, tMax],), constraints=new_lin_const,
                        method='SLSQP', options={'disp': True}, bounds=bds)
The procedure converges, but the returned result violates the linear constraints:
print(res.x)
#array([ 1.38771114, -0.72145294, 1.3960635 , -0.22399423, 1.49987397,
# 1.45837707, 1.49958886, 1.45772475, 5.88312636, 5.83211339,
# 5.81175866, 5.67393651])
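Evaluating the constraint function at the returned point confirms the violation (a quick check, reusing new_lin_const and res from above):

# first two entries are the linear constraint rows X[2*i] + X[2*i + 1] * (tMax + 1.0)
print(new_lin_const['fun'](res.x)[:2])
# roughly [-360.06, -110.83] -- both should be >= 0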
How is it possible that the result violates the constraints?