
I am pretty new to this area, so the question I ask might be straightforward or look naive to other professionals. For a 1D random walk problem, such as the drunkard's walk, there is no connection between the current move and the previous move, and the problem can be easily solved with the absorbing Markov chain method. However, what would happen if we assume:

(1) the drunkard has a 70% chance of walking forward and a 30% chance of walking backward, if the previous step was forward; and

(2) the drunkard has a 30% chance of walking forward and a 70% chance of walking backward, if the previous step was backward.

Is there any way, or any recommendation, for solving this kind of question? BTW, Monte Carlo is not considered an excellent option. I would really appreciate the help.

Lexus00
    Sounds like Markov chain to me. Or diffusion with an imposed gradient. – duffymo Mar 09 '15 at 18:38
  • I agree. Markov chains are the way to go if you are observing a random variable with a state. – eigenchris Mar 09 '15 at 20:39
  • Markov chains are independent of the past. If the previous step is taken into the equation, then this is NOT a Markov chain. A Markov chain would have the same probability whatever the last move was. – Jazzwave06 Mar 09 '15 at 21:54
  • @sturcotte06 You're right that Markov chains don't depend on the past, but they are allowed to depend on the current state. The current state can simply be "whatever direction we just stepped in", so we can keep track of whether our last step was forward or backward and decide on the probabilities from there. – eigenchris Mar 10 '15 at 04:55

1 Answer


Your state has to include the direction of the last step. With states -1 (last step backward) and +1 (last step forward), you have the transitions

-1 --> -1
+1 --> +1

with 70% probability each, and

-1 --> +1
+1 --> -1

with 30% probability each. To apply the absorbing-chain method, combine this with the walker's position, so that each state has the form (position, last step direction).
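As a minimal sketch of this idea (the boundary positions 0 and N, the value p = 0.7, and the function name are my own illustrative choices, not from the question): build the transient-to-transient matrix Q over states (position, last direction), and solve the standard absorbing-chain system (I - Q) h = b for the probability h of being absorbed at the right end.

```python
import numpy as np

def absorb_prob_right(N=10, p=0.7):
    """Probability of reaching position N before 0, for a persistent walk
    on 0..N where the walker repeats its last step with probability p."""
    dirs = (-1, +1)
    # Transient states: (position, direction of last step)
    states = [(x, d) for x in range(1, N) for d in dirs]
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    Q = np.zeros((n, n))   # transitions among transient states
    b = np.zeros(n)        # probability of absorbing at N in one step
    for (x, d), i in idx.items():
        # Repeat last direction with prob p, reverse with prob 1 - p.
        for nd, prob in ((d, p), (-d, 1 - p)):
            nx = x + nd
            if nx == N:
                b[i] += prob
            elif nx > 0:   # nx == 0 absorbs at the left end
                Q[i, idx[(nx, nd)]] += prob
    # Standard absorbing-chain formula: h = (I - Q)^{-1} b
    h = np.linalg.solve(np.eye(n) - Q, b)
    return {s: h[i] for s, i in idx.items()}

probs = absorb_prob_right()
```

With p = 0.5 this reduces to the ordinary symmetric walk, where the answer is position/N, which gives a quick sanity check on the construction.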

Lutz Lehmann