1

I am working on extended Raftery's model which is a more general higher-order Markov chain model, in that I need to solve the following Linear Programming model with certain constraints.

Following is the (link) Linear Programming function that needs to be minimized:

subject to :

Where vectors "W" and "λ" are to be solved in the equation.

Q and X are i-step transition probability matrix and steady-state probabilities respectively.

Below is the sample I am working with:

import numpy as np

one_step_array = np.array([[0.12, 0.75, 0.12],
       [0.42, 0.14, 0.42],
       [0.75, 0.25, 0.0]])

two_step_array = np.array([[0.43, 0.23, 0.33],
       [0.43, 0.44, 0.11],
       [0.20, 0.59, 0.20]])

steady_state = np.array([0.38, 0.39, 0.21])

Q_Arr = np.vstack((np.matmul(one_step_array,steady_state),np.matmul(two_step_array,steady_state))).transpose()

from pulp import *

w1 = LpVariable("w1",0,None)
w2 = LpVariable("w2",0,None)
w3 = LpVariable("W3",0, None)
L1 = LpVariable("L1",0,None)
L2 = LpVariable("L2",0,None)

prob = LpProblem("Problem",LpMinimize)

prob += w1 >= steady_state[0] - Q_Arr[0][0]*L1 - Q_Arr[0][1]*L2
prob += w1 >= -steady_state[0] + Q_Arr[0][0]*L1 + Q_Arr[0][1]*L2

prob += w2 >= steady_state[1] - Q_Arr[1][0]*L1 - Q_Arr[1][1]*L2
prob += w2 >= -steady_state[1] + Q_Arr[1][0]*L1 + Q_Arr[1][1]*L2

prob += w3 >= steady_state[2] - Q_Arr[2][0]*L1 - Q_Arr[2][1]*L2
prob += w3 >= -steady_state[2] + Q_Arr[2][0]*L1 + Q_Arr[2][1]*L2

prob += w1 >= 0
prob += w2 >= 0
prob += w3 >= 0
prob += L1 >= 0
prob += L2 >= 0

prob += L1 + L2 == 1

prob += w1+w2+w3

status = prob.solve(GLPK(msg=0))
LpStatus[status]

print (value(w1))
print (value(w2))
print (value(w3))
print (value(L1))
print (value(L2))

Result is (λ1,λ2,w1,w2,w3) = (1,0,0.051,0.027,0.14) instead of (1,0,0.028,0.0071,0.0214) which is not correct.

Could you please let me know where Am I going wrong?

  • Hi Nadipineni, welcome to SO! IMO, it would help a lot if you post a minimal version of your attempts so far and *exactly* where it went wrong & what went wrong – en_Knight Sep 01 '18 at 04:22
  • @en_Knight I would love to present you with my attempts. However, I am standing clueless about how to proceed from here. Any suggestion would be highly appreciated. – Nadipineni Naimisha Sep 01 '18 at 04:25
  • @en_Knight Hey! I've tried solving it. I am able to Lamda values correctly, However my W vector is not correct. Could you please check now and let me know where am I going wrong – Nadipineni Naimisha Sep 02 '18 at 06:16

1 Answers1

0

Thanks for your review and help! I was able to answer the question myself. Here is the solution:

from pulp import *


Weight_vec = []
Number_of_states = Q_Arr.shape[0]
for x in range(Number_of_states):
    Weight_vec.append('w'+str(x+1))

L1 = LpVariable("L1",0,100)
L2 = LpVariable("L2",0,100)

prob = LpProblem("Problem",LpMinimize)

for s in range(Number_of_states):
    Weight_vec[s] = LpVariable('w'+str(s+1),0,None)
count = 0

for row in Q_Arr:
    prob += steady_state[0] - row[0]*L1 - row[1]*L2 - Weight_vec[count] <= 0
    print (steady_state[0] - row[0]*L1 - row[1]*L2 - Weight_vec[count] <= 0)
    prob += - steady_state[0] + row[0]*L1 + row[1]*L2 - Weight_vec[count] <= 0
    print (- steady_state[0] + row[0]*L1 + row[1]*L2 - Weight_vec[count] <= 0)
    count = count + 1

prob += L1 >= 0
prob += L2 >= 0

prob += L1 + L2 == 1

for s in range(Number_of_states):
    prob += Weight_vec[s] >= 0

#objective
prob += sum(Weight_vec)

status = prob.solve(GLPK(msg=0))
LpStatus[status]

result = []

for s in range(Number_of_states):
    result.append(value(Weight_vec[s]))
result.append(value(L1))
result.append(value(L2))

print (result)