Questions tagged [markov]

The Markov property refers to the memoryless property of a stochastic process.

Overview

From Wikipedia,

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Y_t) can be expressed as:

P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, …, Y_1 = y_1) = P(Y_{t+1} = y | Y_t = y_t)
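For illustration, these memoryless dynamics can be simulated directly from a one-step transition matrix; the states and probabilities below are made up for the sketch, not taken from any question on this page:

```python
import random

# Illustrative two-state chain; state names and probabilities are invented.
STATES = ["sunny", "rainy"]
# TRANSITION[i][j] = P(next = j | current = i)
TRANSITION = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state, rng=random):
    """Draw the next state using only the current state (the Markov property)."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITION[state].items():
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

def simulate(start, n, seed=0):
    rng = random.Random(seed)
    path = [start]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

print(simulate("sunny", 5, seed=1))
```

Note that `step` never looks at anything but the current state, which is exactly the conditional-independence statement in the formula above.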

Tag usage

Please consider asking questions about statistics and data analysis on Cross Validated, the statistics Stack Exchange site.

255 questions
1
vote
1 answer

PST: Error in names(StCol) <- A : attempt to set an attribute on NULL

Consider the following code: library(PST) library(TraMineR) library(RCurl) x <- getURL("https://gist.githubusercontent.com/aronlindberg/c79be941bc86274f4526705600962789/raw/6e3ee8d464c97f1c26631d604de41ca97dc22159/sequence_data.csv") data <-…
histelheim
  • 4,938
  • 6
  • 33
  • 63
1
vote
1 answer

R: Drawing markov model with diagram package (making diagram changes)

I have the following code that draws a transition probability graph using the package heemod (for the matrix) and the package diagram (for drawing). The following code generates such a graph with data that I have…
RalfB
  • 563
  • 1
  • 7
  • 22
1
vote
1 answer

How to find out values of Policy Iteration?

My teacher gave the following problem: Consider the following MDP with 3 states and rewards. There are two possible actions - RED and BLUE. The state transition probabilities are given on the edges, and S2 is a terminal state. Assume that the…
blz
  • 403
  • 6
  • 12
1
vote
0 answers

How to expand a markov probability plot in R?

I am trying to turn a matrix into a probability plot using R. I have done this before, but this particular set of data is contained in an 8x8 matrix, so it is fairly large and in turn contains a lot of arrows. In consequence, the arrows all overlap…
Megan
  • 61
  • 1
1
vote
2 answers

Markov chain transition matrix outputting only '0.000's

I am attempting to produce a transition matrix based on the first column of a csv file. There are 59 possible states, which are integers in a list. At the moment my output is simply a long list of '0.000's - of the correct size as far as I can…
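A frequent cause of an all-zero probability matrix like the one in this question is integer division when normalizing transition counts. A minimal sketch of the counting-and-normalizing idea (illustrative, not the asker's code; the example sequence is made up):

```python
# Illustrative sketch: estimate a transition matrix from a single state
# sequence.  Normalizing with float division avoids the classic all-zero
# result that integer division produces.
def transition_matrix(seq):
    states = sorted(set(seq))
    index = {s: i for i, s in enumerate(states)}
    counts = [[0] * len(states) for _ in states]
    for a, b in zip(seq, seq[1:]):
        counts[index[a]][index[b]] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        # float division; a state with no outgoing transitions keeps a zero row
        matrix.append([c / total if total else 0.0 for c in row])
    return states, matrix

states, m = transition_matrix([1, 1, 2, 1, 3, 1, 2])
print(states)
print(m)
```

Each row of `m` sums to 1 (or is all zeros for a state that is never left), which is a quick sanity check before formatting the output.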
1
vote
0 answers

Hidden Markov Model: The current observation depends on the previous observation

This question is for the case of homogeneous discrete HMMs. In regular HMMs, the probability of the current state depends only on the previous state, that is Pr(S_t|S_1,S_2,...,S_(t-1)) = Pr(S_t|S_(t-1)), and the probability of an output…
Ibrahim
  • 11
  • 1
1
vote
1 answer

Represent state space graph for Markov process for car racing example

Could anybody please help me with designing a state-space graph for the Markov decision process in the car racing example from Berkeley CS188? For example, I can do 100 actions and I want to run value iteration to get the best policy to…
Pavel
  • 23
  • 3
1
vote
1 answer

Markov Chain Probability using simulations query

I am trying to make a graph in R that plots the probability of hitting the first state before the final state in a Markov chain for different values of k. But when plotting, I only get the final value of k and not all k from 1 to 17. This is the…
Ahmed Jyad
  • 19
  • 3
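The symptom in this question (only the last k appears) usually means the loop overwrites a single result instead of storing one result per k. A sketch of the simulation idea with per-k storage, in Python rather than the asker's R, with an invented symmetric random walk standing in for their chain:

```python
import random

# Illustrative sketch: estimate, by simulation, the probability that a simple
# symmetric random walk started at k hits state 0 before state n.  The key
# point is appending one result per k instead of overwriting a single value.
def hit_zero_first(k, n, rng):
    pos = k
    while 0 < pos < n:
        pos += 1 if rng.random() < 0.5 else -1
    return pos == 0

def hitting_probs(n=17, trials=2000, seed=0):
    rng = random.Random(seed)
    probs = []
    for k in range(1, n):          # append inside the loop: one entry per k
        hits = sum(hit_zero_first(k, n, rng) for _ in range(trials))
        probs.append(hits / trials)
    return probs

probs = hitting_probs()
print(len(probs))  # one estimate per starting state k = 1..16
```

With the full `probs` list collected, all values can then be plotted at once instead of only the final one.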
1
vote
1 answer

Generate Kolmogorov-Chapman equations for Markov processes

I am looking for a way to generate Kolmogorov-Chapman equations for MathCad to solve Markov Chain problem. Problem is to find probability of the system being in one of the states. System has N components. I have a graph with 2^N nodes (states), and…
Rahul
  • 1,727
  • 3
  • 18
  • 33
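The Chapman-Kolmogorov (Kolmogorov-Chapman) equations asked about here say that the (m+n)-step transition matrix is the product of the m-step and n-step matrices, P^(m+n) = P^(m) P^(n). A small numeric check of that identity, with a made-up 2-state chain (a sketch, not MathCad code):

```python
# Illustrative Chapman-Kolmogorov check: the (m+n)-step transition matrix
# equals the product of the m-step and n-step matrices.  The 2-state chain
# below is invented for the sketch.
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def mat_pow(p, n):
    result = [[float(i == j) for j in range(len(p))] for i in range(len(p))]
    for _ in range(n):
        result = mat_mul(result, p)
    return result

P = [[0.9, 0.1],
     [0.5, 0.5]]
lhs = mat_pow(P, 5)                           # P^(2+3)
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))   # P^2 * P^3
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
          for i in range(2) for j in range(2)))
```

For the 2^N-node system in the question, the same identity lets n-step state probabilities be computed by repeated matrix multiplication rather than by writing out every equation by hand.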
1
vote
0 answers

How to implement Markov-switching VAR in python?

This is a link that shows how to implement a Markov-switching AR model. However, I want to extend the AR model to a VAR model. Does anyone know how to do it in Python? I appreciate your help.
Yuefan Zhu
  • 11
  • 2
1
vote
1 answer

How to implement a simple stride predictor?

I have implemented a contextual Markov predictor and I need to make a stride predictor to combine them into a hybrid predictor with confidence. To begin, I need to implement this stride predictor. I read about it and I found this figure but…
Cătălin
  • 11
  • 2
1
vote
0 answers

Infinite loop in Modified Value Iteration(MDP GridWorld)

Consider a simple 3x4 GridWorld with reward -0.04 [ ][ ][ ][+1] [ ][W][ ][-1] [ ][ ][ ][ ] where W is a wall and +1/-1 are terminal states. An agent can move in any direction, but only 80% of the time does it succeed in going in the planned direction,…
P. Lance
  • 179
  • 2
  • 13
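The grid in this question is the standard 3x4 example; a compact value-iteration sketch for it is below. The coordinates, wall position, and noise model (80% intended move, 10% to each perpendicular side) follow the question's description; everything else is an illustrative implementation choice, not the asker's code:

```python
# Illustrative value iteration for the 3x4 GridWorld described above:
# step reward -0.04, wall at (1, 1), terminals +1 and -1 in the last column.
GAMMA = 1.0
STEP_REWARD = -0.04
ROWS, COLS = 3, 4
WALL = (1, 1)
TERMINALS = {(0, 3): 1.0, (1, 3): -1.0}
MOVES = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
PERP = {"U": ("L", "R"), "D": ("L", "R"), "L": ("U", "D"), "R": ("U", "D")}

def move(state, action):
    r, c = state[0] + MOVES[action][0], state[1] + MOVES[action][1]
    if 0 <= r < ROWS and 0 <= c < COLS and (r, c) != WALL:
        return (r, c)
    return state  # bumping a wall or the grid edge leaves the agent in place

def value_iteration(eps=1e-6):
    values = {(r, c): 0.0 for r in range(ROWS) for c in range(COLS)
              if (r, c) != WALL}
    while True:
        delta = 0.0
        new = dict(values)
        for s in values:
            if s in TERMINALS:
                new[s] = TERMINALS[s]
                continue
            # 80% intended direction, 10% to each perpendicular side
            best = max(
                sum(p * values[move(s, a2)]
                    for a2, p in [(a, 0.8),
                                  (PERP[a][0], 0.1),
                                  (PERP[a][1], 0.1)])
                for a in MOVES)
            new[s] = STEP_REWARD + GAMMA * best
            delta = max(delta, abs(new[s] - values[s]))
        values = new
        if delta < eps:
            return values

V = value_iteration()
print(round(V[(0, 0)], 3))
```

A common cause of the non-termination asked about is measuring convergence per state inside the sweep instead of taking the maximum change (`delta`) across the whole sweep, as done here.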
1
vote
1 answer

AIC and BIC in Markov Switching MSwM package

I'm trying to understand some functionalities of the MSwM package so I can use it in a paper I'm writing. There are two things I just don't get while reproducing the example provided by the authors. The first one is related to the summary method…
Fer
  • 11
  • 4
1
vote
1 answer

calculate transition matrix for multiple sequences in matrix

Hi guys, I am trying to calculate a transition matrix for every sequence, where each sequence is a row of a matrix. For example, I have a matrix: dat<-matrix(c('a','b','c','a','a','a','b','b','a','c','a','a','c','c','a'),nrow = 3) > ` …
user3672160
  • 77
  • 1
  • 7
1
vote
1 answer

Storing values from a loop in a function in Matlab

I am writing a function in Matlab to model the length of stay in hospital of stroke patients. I am having difficulty storing my output values. Here is my function: function [] = losdf(age, strokeType, dest) % function to determine length of…
user3497570
  • 25
  • 1
  • 8