Questions tagged [markov]

The Markov property refers to the memorylessness of a stochastic process: its future evolution depends only on its current state.

Overview

From Wikipedia:

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Yt) satisfies, for every t:

P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, …, Y_0 = y_0) = P(Y_{t+1} = y | Y_t = y_t)
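The memoryless property above can be illustrated with a minimal simulation. This is a sketch using a hypothetical two-state weather chain; the state names and transition probabilities are invented for illustration:

```python
import random

# Hypothetical transition probabilities: from each state, the
# distribution of the next state depends only on the current state.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng):
    """Sample the next state given only the current state (Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point round-off

def simulate(start, n_steps, seed=None):
    """Return a sample path of length n_steps + 1 starting from `start`."""
    rng = random.Random(seed)
    state = start
    path = [state]
    for _ in range(n_steps):
        state = step(state, rng)
        path.append(state)
    return path

print(simulate("sunny", 10, seed=42))
```

Note that `step` never inspects the history of the path, only the current state; that is exactly the Markov property in the formula above.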

Tag usage

Please consider asking questions about statistics and data analysis on the Cross Validated Stack Exchange site.

255 questions
3 votes • 0 answers
Building a time-inhomogeneous Markov chain in Python

Using the search function did not help me to find a solution for my problem, which is why I created this post. First of all, I am fairly new to Python and therefore my knowledge is limited. I am analysing a data set, which is based on time use…
3 votes • 1 answer
First-Visit vs Every-Visit Monte Carlo

I have recently been looking into reinforcement learning. For this, I have been reading the famous book by Sutton, but there is something I do not fully understand yet. For Monte-Carlo learning, we can choose between first-visit and every-visit…
3 votes • 1 answer
R markov chain package: is it possible to set the coordinates and size of the nodes?

I'm working with R on some biology-behavioural problems, and I have a transition matrix which I want to plot in a certain way. I'm using the markovchain package, which makes visualization easy. This is a test code and its output. >…
3 votes • 0 answers
Markov processes with higher order functions in R

I'm looking for a clean way to run simulations where each iteration depends on the result of the preceding iteration in a "functional style." For example, say we want to take a normally distributed sample of size 10 with mean meanInit, take the mean…
3 votes • 2 answers
Markov library/samples in F#

I am working on a personal project with F# and would like to experiment with F# and Markov models. Can anyone recommend a library/sample with source that supports Markov modeling? Since this is a personal project I would prefer something that is…
jnoss • 2,049
3 votes • 1 answer
Using Markov chains for procedural music generation

Does anyone know of an online resource where I can find stochastic matrices for an nth order Markov chain describing the probability of a note being played based on the previous n notes (for different musical genres, if possible)? I am looking for…
3 votes • 2 answers
Calculating standard deviations in Stata to approximate beta distributions

My question relates to calculating the standard deviation (SD) of transition probabilities derived from coefficients estimated through Weibull regression in Stata. The transition probabilities are being used to model disease progression of leukemia…
Emily • 31
3 votes • 2 answers
clojure simple markov data transform

If I have a vector of words, for example ["john" "said"... "john" "walked"...], and I want to make a hash map of each word and the number of occurrences of the next word, for example {"john" {"said" 1 "walked" 1 "kicked" 3}}. The best solution I came up…
user2150839 • 483
3 votes • 1 answer
Intuition behind policy iteration on a grid world

I am supposed to come up with an MDP agent that uses policy iteration and value iteration for an assignment and compare its performance with the utility value of a state. How does an MDP agent, given that it knows the transition probabilities and…
kkh • 4,799
2 votes • 1 answer
Markov decision process - how to use optimal policy formula?

I have a task where I have to calculate the optimal policy (Reinforcement Learning - Markov decision process) in a grid world (the agent moves left, right, up, down). In the left table are the optimal values (V*). In the right table is the solution…
OldFox • 425
2 votes • 2 answers
Why does my markov chain produce identical sentences from corpus?

I am using the markovify Markov chain generator in Python, and when using the example code given there, it produces a lot of duplicate sentences for me and I don't know why. The code is as follows: import markovify # Get raw text as string. with…
2 votes • 1 answer
N-sided die MDP problem Value Iteration Solution Needed

I'm working on a problem for one of my classes. The problem is this: a person starts with $0 and rolls an N-sided die (N could range from 1 to 30) and wins money according to the side they roll. X sides (ones) of the N-sided die result in…
2 votes • 2 answers
Why introduce Markov property to reinforcement learning?

As a beginner in deep reinforcement learning, I am confused about why we should use Markov processes in reinforcement learning, and what benefits they bring to it. In addition, a Markov process requires that under the "known"…
曹子轩 • 31
2 votes • 1 answer
What is terminal state in gridworld?

I am learning about Markov decision processes, but I don't know where to mark terminal states. In the 4x3 grid world, I marked the terminal state that I think is correct (I might be wrong) with T. I saw an instruction marking terminal states as…
user13612530
2 votes • 1 answer
Implementing a discrete Markov Chain simulation in c++ with a graphical interface

I just wanted to know if anyone had any pointers for a library or libraries that support Markov modelling and graphical representation of graphs, as for a project I must simulate a transport model and be able to develop an interface for it too. I am…
shogeluk • 23