
I am playing with pymdptoolbox. It has a built-in forest management problem that generates a transition matrix P and reward matrix R when you pass the number of states to the forest function (the default is 3 states). Running QLearning, PolicyIteration and ValueIteration to find the optimal policy is straightforward. However, when I make the problem slightly larger by increasing the number of states beyond 4 (from 5 onwards), PI and VI still agree on the same policy, but QL no longer finds the optimal policy. This is very surprising and puzzling. Can anyone help me understand why QL behaves this way in this package?

Looking at the raw code of QL (which uses epsilon-greedy), it ties the exploration probability to the iteration number, i.e. prob = 1 - (1/log(n+2)), and likewise the learning rate is (1/math.sqrt(n+2)). Is there a specific reason for tying the exploration probability and learning rate to the iteration number rather than exposing them as independent parameters? (The code itself is easy to modify.)
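
For reference, here is a rough sketch (my paraphrase, not the package's code verbatim) of the schedules described above; both the exploration probability and the learning rate are functions of the iteration counter n, so neither can be tuned independently of the other:

import math
import random

def epsilon_greedy_action(Q, state, n, n_actions):
    # Greedy action with probability 1 - 1/log(n + 2), otherwise a
    # uniformly random action. Q is assumed to be a NumPy Q-table.
    if random.random() < (1 - (1 / math.log(n + 2))):
        return int(Q[state].argmax())
    return random.randrange(n_actions)

def q_update(Q, s, a, r, s_next, n, gamma):
    # One Q-learning update with learning rate 1/sqrt(n + 2),
    # again tied to the iteration count n.
    alpha = 1 / math.sqrt(n + 2)
    td_error = r + gamma * Q[s_next].max() - Q[s, a]
    Q[s, a] += alpha * td_error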

I think my biggest puzzle is understanding why QL fails to find the optimal policy for such a vanilla problem. Thanks.

from mdptoolbox.mdp import ValueIteration, QLearning, PolicyIteration
from mdptoolbox.example import forest

Gamma = 0.99

states = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 20, 30, 50, 70, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

compare_VI_QL_policy = [] # True or False per state count
compare_VI_PI_policy = []

for state in states:

    P, R = forest(state)

    VI = ValueIteration(P, R, Gamma)
    PI = PolicyIteration(P, R, Gamma)
    QL = QLearning(P, R, Gamma)

    # run VI
    VI.run()

    # run PI
    PI.run()

    # run QL
    QL.run()

    compare_VI_QL_policy.append(QL.policy == VI.policy)
    compare_VI_PI_policy.append(VI.policy == PI.policy)

print(compare_VI_QL_policy)
print(compare_VI_PI_policy)
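
Since both schedules above decay with the iteration count, one experiment is simply to give Q-learning a much larger iteration budget and re-check the comparison. A minimal sketch, assuming QLearning's n_iter argument controls the number of learning iterations:

QL_long = QLearning(P, R, Gamma, n_iter=100000)  # default budget is 10000
QL_long.run()
print(QL_long.policy == VI.policy)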