So even if the eligibility trace is non-zero for previous states, the value of delta will be zero in the above case (because the rewards and the utility function are both initialized to 0). Then how is it possible for previous states to get utility values other than zero in the first update?
You are right that, in the first update, all rewards and updates will still be 0 (except if we already manage to reach the goal in a single step, in which case the reward won't be 0).
However, the eligibility traces e_t will continue to "remember" or "memorize" all the states that we have previously visited. So, as soon as we do manage to reach the goal state and get a non-zero reward, the eligibility traces will still remember all the states that we went through. Those states will still have non-zero entries in the table of eligibility traces, and will therefore all get a non-zero update at once as soon as we observe the first reward.
The table of eligibility traces does decay every time step (it is multiplied by gamma * lambda_), so the magnitude of updates to states that were visited a long time ago will be smaller than the magnitude of updates to states that were visited very recently. However, we will continue to remember all those states; they will keep non-zero entries (under the assumption that gamma > 0 and lambda_ > 0). This allows the values of all visited states to be updated, not as soon as we reach those states, but as soon as we observe a non-zero reward after having visited them at some earlier point in time (or, in epochs after the first one, as soon as we reach a state that already has a non-zero predicted value).
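To make this concrete, below is a minimal sketch of tabular TD(lambda) on a hypothetical 3x4 grid where only the goal yields a reward; the names utility, trace_matrix, gamma, lambda_ and alpha are assumptions chosen to mirror the discussion above, not the tutorial's exact code. Note how delta stays 0 on every step until the goal is reached, and how that single non-zero delta then updates every state that still has a non-zero trace:

import numpy as np

gamma, lambda_, alpha = 0.999, 0.5, 0.1

utility = np.zeros((3, 4))       # state values, initialized to 0
trace_matrix = np.zeros((3, 4))  # eligibility traces, initialized to 0

# Suppose the agent walks along the bottom row and then up the last column,
# reaching the goal at (0, 3) on the final transition.
path = [(2, 0), (2, 1), (2, 2), (2, 3), (1, 3), (0, 3)]

for t in range(len(path) - 1):
    s, s_next = path[t], path[t + 1]
    reward = 1.0 if s_next == (0, 3) else 0.0   # non-zero only at the goal

    # TD error: stays 0 until the first non-zero reward is observed
    delta = reward + gamma * utility[s_next] - utility[s]

    # Decay all traces, then bump the trace of the current state
    trace_matrix *= gamma * lambda_
    trace_matrix[s] += 1.0

    # Every state with a non-zero trace gets updated, all at once,
    # as soon as delta becomes non-zero
    utility += alpha * delta * trace_matrix

print(utility)  # every previously visited state now has a non-zero value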
Also, in the given Python implementation, the following output is given after a single iteration:
[[ 0.       0.04595  0.1      0.     ]
 [ 0.       0.       0.       0.     ]
 [ 0.       0.       0.       0.     ]]
Here only 2 values are updated instead of all 5 previous states as shown in the figure. What am I missing here?
The first part of their code looks as follows:
for epoch in range(tot_epoch):
    # Reset and return the first observation
    observation = env.reset(exploring_starts=True)
So, every new epoch, they start by resetting the environment using the exploring_starts flag. If we look at the implementation of their environment, we see that this flag means that we always start out at a random initial position.
So, I suspect that, when the code was run to generate that output, the initial position was simply randomly selected to be the position two steps to the left of the goal, rather than the position in the bottom-left corner. If the initial position is randomly selected to already be closer to the goal, the agent only ever visits those two states for which you see non-zero updates, so those will also be the only states with non-zero entries in the table of eligibility traces and therefore be the only states with non-zero updates.
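Purely as an illustration of that flag (this is a hypothetical sketch, not the environment code from the tutorial), an "exploring starts" reset might look roughly like this:

import random

def reset(exploring_starts=False, world_rows=3, world_cols=4):
    # Hypothetical reset: with exploring starts, pick any cell uniformly at
    # random as the start state; otherwise always start in the bottom-left.
    if exploring_starts:
        return (random.randrange(world_rows), random.randrange(world_cols))
    return (world_rows - 1, 0)

If such a random start happens to land only two steps away from the goal, the agent only ever visits those two states, which is consistent with the output above.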
If the initial position really is the position in the bottom-left corner, a correct implementation of the algorithm would indeed update the values for all the states along that path (assuming no extra tricks are added, like setting entries to 0 if they happen to get "close enough" to 0 due to decaying).
I'd also like to note that there is in fact a mistake in the code on that page: they do not reset all the entries of the table of eligibility traces to 0 when resetting the environment / starting a new epoch. This should be done. If it is not done, the eligibility traces will still memorize states that were visited during previous epochs and also still update all of those, even if they were not visited again in the new epoch. This is incorrect. A correct version of their code should start out like this:
for epoch in range(tot_epoch):
    # Reset and return the first observation
    observation = env.reset(exploring_starts=True)
    trace_matrix = trace_matrix * 0.0  # IMPORTANT, added this
    for step in range(1000):
        ...