Questions tagged [markov]

Markov, or the Markov property, refers to the memoryless property of a stochastic process.

Overview

From Wikipedia:

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Y_t) can be expressed as:

    P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, ..., Y_0 = y_0) = P(Y_{t+1} = y | Y_t = y_t)

Tag usage

Please consider asking questions concerning statistics or data analysis on Cross Validated (stats.stackexchange.com).

255 questions
2
votes
2 answers

Markov transition matrix from sequences of doctor visits for different patients

I am trying to create a Markov transition matrix from sequences of doctor visits for different patients. In my Markov model, the states are the different doctors and the connections are visits by patients. A patient can stay with the same provider or…
Dr. Turkuaz
  • 39
  • 1
  • 10
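
A minimal sketch of this construction in Python, assuming each patient's history is an ordered list of provider labels (the sequences and labels below are made up, not from the question): count consecutive-visit transitions, then row-normalize.

    import numpy as np

    # Hypothetical visit sequences: each inner list is one patient's ordered doctors.
    sequences = [["A", "B", "B", "C"], ["B", "B", "A"], ["C", "A", "A"]]

    doctors = sorted({d for seq in sequences for d in seq})
    index = {d: i for i, d in enumerate(doctors)}

    # Count transitions between consecutive visits across all patients.
    counts = np.zeros((len(doctors), len(doctors)))
    for seq in sequences:
        for prev, cur in zip(seq, seq[1:]):
            counts[index[prev], index[cur]] += 1

    # Row-normalize counts into transition probabilities (guarding empty rows).
    row_sums = counts.sum(axis=1, keepdims=True)
    P = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    print(doctors)
    print(P)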
2
votes
1 answer

R - Need help with Multi-State Markov and Block Bootstrap

First of all, I have to say that I am very new to R and have never had any experience with Markov analysis or bootstrapping before. I have been researching these for some time but couldn't find a solution, so I decided to post this question. I have a…
PBS115
  • 21
  • 2
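
The block-bootstrap half of this question can be sketched independently of the multi-state model. A minimal moving-block bootstrap in Python (the series and block length are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(42)
    series = np.arange(100)      # stand-in for an observed state sequence
    block_len = 10               # illustrative block length
    n_blocks = len(series) // block_len

    # Moving-block bootstrap: resample overlapping blocks with replacement and
    # concatenate them, preserving short-range dependence within each block.
    starts = rng.integers(0, len(series) - block_len + 1, size=n_blocks)
    resampled = np.concatenate([series[s:s + block_len] for s in starts])
    print(resampled[:12])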
2
votes
0 answers

Markov first-order text processing in Python

I wrote code for generating text from a given text file using a first-order Markov model. First I create a dictionary from the text file; for punctuation ('.', '?', '!') the key is '$'. After creating the dictionary I generate text randomly from the created…
ohid
  • 824
  • 2
  • 8
  • 23
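
A minimal first-order Markov text generator in Python, following the '$' key convention the question describes for sentence-ending punctuation (the toy corpus is made up):

    import random
    from collections import defaultdict

    text = "the cat sat . the dog sat ! the cat ran ?"   # toy corpus

    # Build the dictionary: each key maps to the words observed after it.
    # Sentence-ending punctuation is filed under the '$' key, as in the question.
    model = defaultdict(list)
    words = text.split()
    for prev, cur in zip(words, words[1:]):
        key = "$" if prev in (".", "?", "!") else prev
        model[key].append(cur)

    # Generate by repeatedly sampling a random successor of the current key.
    word, out = "$", []
    for _ in range(20):
        word = random.choice(model["$" if word in (".", "?", "!") else word])
        out.append(word)
    print(" ".join(out))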
2
votes
1 answer

Confusion about Markov random fields in the mgcv package in R

In order to implement a spatial analysis, I tried a simple Markov random field smoother on an example from the mgcv package in R, where the manual is…
cchien
  • 46
  • 5
2
votes
2 answers

Ergodic Markov chain stationary distribution: solving the equations

I am trying to solve a set of equations to determine the stationary distribution of an ergodic Markov matrix. Namely, the matrix is

    P = [0   0   0   0.5 0   0.5;
         0.1 0.1 0   0.4 0   0.4;
         0   0.2 0.2 0.3 0   0.3;
         0   0   0.3 0.5 0   0.2;
         0…
Winston
  • 143
  • 1
  • 4
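
One standard numeric route for such systems: the balance equations pi (P - I) = 0 are redundant, so replace one of them with the normalization sum(pi) = 1 and solve the resulting linear system. A Python sketch (the 3x3 matrix is illustrative, not the 6x6 one from the question):

    import numpy as np

    # Illustrative ergodic transition matrix (rows sum to 1).
    P = np.array([[0.1, 0.6, 0.3],
                  [0.4, 0.2, 0.4],
                  [0.3, 0.3, 0.4]])

    n = P.shape[0]
    # pi solves pi (P - I) = 0; transpose to A pi = 0, then swap one
    # redundant equation for the normalization row of ones.
    A = P.T - np.eye(n)
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi, pi @ P)   # pi @ P should reproduce pi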
2
votes
1 answer

R package 'MSwM' gives inconsistent results

    mod.mswm <- msmFit(lm(y~x), k=2, p=1, sw=rep(T,4),
                       control=list(maxiter=700, parallel=F))
    summary(mod.mswm)

I get inconsistent results from this R package. First run -> regime 1 = "estimate = 0.05", regime 2 = "estimate = 0.90". Second…
2
votes
1 answer

Algorithms for learning user inputs, and for offering suggestions

I'm searching for an algorithm or method for learning user actions (inputs) in a certain program and, based on the built-up information base of past user actions, offering suggestions for future actions to the user. The information base…
fotinsky
  • 972
  • 2
  • 10
  • 25
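
A common baseline for this kind of suggestion engine is a first-order Markov model over actions: count which action tends to follow which, then suggest the most frequent successors. A small Python sketch with made-up action names:

    from collections import Counter, defaultdict

    # Hypothetical logged action history for one user.
    history = ["open", "edit", "save", "open", "edit",
               "compile", "open", "edit", "save"]

    # Count the successors of each action.
    successors = defaultdict(Counter)
    for prev, cur in zip(history, history[1:]):
        successors[prev][cur] += 1

    def suggest(action, k=2):
        """Return the k most frequent actions observed after `action`."""
        return [a for a, _ in successors[action].most_common(k)]

    print(suggest("edit"))   # ['save', 'compile']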
2
votes
1 answer

Incorrect number of probabilities when simulating a Markov chain

My transition probability matrix is like this:

            BP         IP         SP
    BPBP    0.4586757  0.3772354  0.1640889
    IPBP    0.3489484  0.4746654  0.1763862
    SPBP    0.3756522  0.4162319  0.2081159
    BPIP    …
Rup Mitra
  • 41
  • 1
  • 2
  • 4
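
Errors like this usually mean the sampler was handed a probability vector whose length does not match the number of candidate states. A Python sketch of the simulation step for the pair-state matrix above, using only the three rows quoted (the rest are elided in the question):

    import numpy as np

    rng = np.random.default_rng(0)
    pitches = ["BP", "IP", "SP"]

    # Rows are (previous, current) pair states; columns are the next pitch.
    rows = {("BP", "BP"): [0.4586757, 0.3772354, 0.1640889],
            ("IP", "BP"): [0.3489484, 0.4746654, 0.1763862],
            ("SP", "BP"): [0.3756522, 0.4162319, 0.2081159]}

    state = ("BP", "BP")
    for _ in range(5):
        if state not in rows:                 # remaining rows are elided above
            break
        probs = np.array(rows[state])
        probs /= probs.sum()                  # guard against rounding drift
        nxt = rng.choice(pitches, p=probs)    # exactly one probability per column
        print(state, "->", nxt)
        state = (state[1], nxt)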
2
votes
1 answer

Java - Taking character frequencies, creating probabilities, and then generating pseudo-random characters

I'm creating a pseudo-random text generator using a Markov model. Basically, I use a hash table to store lists of substrings of order k (the order of the Markov model); then for each substring I have a TreeMap of the suffixes with their frequencies…
user1547050
  • 337
  • 2
  • 7
  • 15
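
The frequency-weighted choice of a suffix that the question describes maps directly onto weighted sampling. A Python sketch of the idea, with a dict of Counters standing in for the hash table of TreeMaps and an illustrative order k = 2:

    import random
    from collections import Counter, defaultdict

    k = 2                                   # order of the Markov model
    text = "abracadabra abracadabra"        # toy training text

    # Map each k-character substring to the frequencies of its successors.
    model = defaultdict(Counter)
    for i in range(len(text) - k):
        model[text[i:i + k]][text[i + k]] += 1

    def next_char(prefix):
        """Sample a successor in proportion to its observed frequency."""
        chars, weights = zip(*model[prefix].items())
        return random.choices(chars, weights=weights)[0]

    out = text[:k]
    for _ in range(30):
        if out[-k:] not in model:           # dead end: no observed successor
            break
        out += next_char(out[-k:])
    print(out)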
2
votes
1 answer

Continuous Time Markov Process

What are the methods for solving a continuous-time Markov process? I know that for well-known processes such as birth-death processes or some queues, the problem can be solved analytically. However, how do you solve it when it is not analytically solvable? It looks like a numerical method…
justin
  • 99
  • 9
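
When a continuous-time chain has no closed form, two standard numerical routes are solving pi Q = 0 for the stationary distribution and computing transient probabilities p(t) = p(0) exp(Qt) with a matrix exponential. A Python sketch with an illustrative 3-state generator:

    import numpy as np
    from scipy.linalg import expm

    # Illustrative generator matrix Q (each row sums to 0).
    Q = np.array([[-0.5,  0.3,  0.2],
                  [ 0.4, -0.9,  0.5],
                  [ 0.1,  0.6, -0.7]])

    # Stationary distribution: solve pi Q = 0 subject to sum(pi) = 1.
    n = Q.shape[0]
    A = Q.T.copy()
    A[-1, :] = 1.0                 # replace one redundant equation
    b = np.zeros(n)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)

    # Transient distribution at time t via the matrix exponential.
    p0 = np.array([1.0, 0.0, 0.0])
    pt = p0 @ expm(Q * 2.0)
    print(pi, pt)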
1
vote
1 answer

Minimize the L1-norm of a*A(x1,x2) - b by finding unknown coefficients/probabilities x1 and x2

I have the following problem: I am trying to write an optimisation routine in Julia that calculates the potentially unknown coefficients of a transition probability matrix that guarantees that I get from state vector a to new state vector b. As A is a…
HeroldEcon
  • 11
  • 1
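
The excerpt cuts off before the structure of A is clear, but if A(x1, x2) were, say, a convex combination x1*A1 + (1-x1)*A2 of two known matrices with x2 = 1 - x1 (purely an assumption, not something stated in the question), the L1 objective reduces to a one-dimensional bounded search. A Python sketch of that reduced problem, with made-up data:

    import numpy as np
    from scipy.optimize import minimize_scalar

    # Hypothetical inputs; none of these values come from the question.
    A1 = np.array([[0.9, 0.1], [0.2, 0.8]])
    A2 = np.array([[0.5, 0.5], [0.6, 0.4]])
    a = np.array([0.3, 0.7])
    b = np.array([0.4, 0.6])

    def l1_gap(x1):
        """L1 distance between a A(x1, x2) and b, with x2 = 1 - x1."""
        A = x1 * A1 + (1 - x1) * A2
        return np.abs(a @ A - b).sum()

    res = minimize_scalar(l1_gap, bounds=(0.0, 1.0), method="bounded")
    print(res.x, 1 - res.x, res.fun)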
1
vote
1 answer

How to calculate the period of each state in a Markov chain?

    library(markovchain)
    P <- matrix(c(0, 0.5, 0.5, 0, 0, 0,
                  1, 0, 0, 0, 0, 0,
                  1, 0, 0, 0, 0, 0,
                  0, 0, 0, 0.25, 0.5, 0.25,
                  0, 0, 0, 0.5, 0.25, 0.25,
                  0, 0, 0, 0, 0, 1),
                nrow = 6, ncol = 6, byrow =…
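
The markovchain package reports periods per communicating class; the underlying idea (the period of a state is the gcd of the lengths of all cycles through it) can be sketched in Python with a BFS level argument. The 3-state matrix is illustrative; the 6x6 one above would be handled the same way:

    from collections import deque
    from math import gcd

    import numpy as np

    # Illustrative chain that cycles 0 -> 1 -> 2 -> 0, so every state has period 3.
    P = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])

    def period(P, start):
        """gcd of level[u] + 1 - level[v] over all edges reachable from start."""
        level, d = {start: 0}, 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in np.nonzero(P[u])[0]:
                if v in level:
                    d = gcd(d, level[u] + 1 - level[v])
                else:
                    level[v] = level[u] + 1
                    queue.append(v)
        return d

    print(period(P, 0))   # 3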
1
vote
0 answers

Markov Decision Process: Implementing the Q-value problem

I need help with the Q-value method. I have most of the code working, but when calculating the Q-value things are not right. I know that if the arrow points towards an area where there is no state it needs to bounce back, and if it points to the gray box it…
jwolf
  • 33
  • 3
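
The bounce-back rule the question describes fits the usual gridworld Q-value backup: Q(s, a) is the transition-weighted sum of the immediate reward plus the discounted value of the landing cell, where off-grid and blocked moves leave the agent in place. A Python sketch with made-up grid, noise, and reward parameters:

    # Hypothetical 3x4 gridworld; (1, 1) is the blocked gray cell, and the
    # values, discount, noise, and step reward are all made-up parameters.
    V = {(r, c): 0.0 for r in range(3) for c in range(4)}
    V[(0, 3)] = 1.0
    blocked = {(1, 1)}
    gamma, noise, step_reward = 0.9, 0.2, 0.0

    moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}

    def land(state, move):
        """Resulting cell; bounce back if off-grid or into the blocked cell."""
        r, c = state[0] + moves[move][0], state[1] + moves[move][1]
        if not (0 <= r < 3 and 0 <= c < 4) or (r, c) in blocked:
            return state
        return (r, c)

    def q_value(state, action):
        """Intended move with prob 1 - noise; each perpendicular with noise / 2."""
        side = {"N": ("E", "W"), "S": ("E", "W"),
                "E": ("N", "S"), "W": ("N", "S")}
        outcomes = [(1 - noise, land(state, action))]
        outcomes += [(noise / 2, land(state, s)) for s in side[action]]
        return sum(p * (step_reward + gamma * V[s2]) for p, s2 in outcomes)

    print(q_value((0, 2), "E"))   # 0.8 * 0.9 * 1.0 = 0.72 plus zero side terms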
1
vote
0 answers

What to do with an unknown Markov state

I have a state in my test dataset that did not exist in my training set (and is therefore not in my transition matrix). How can I create a transition probability simplex for this unknown state, or make assumptions about the closest state to use that is available…
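
Two common fallbacks: add-one (Laplace) smoothing over an enlarged matrix that includes the new state, or backing off to a uniform row for it. A Python sketch with an illustrative two-state training matrix:

    import numpy as np

    # Transition counts observed in training for states ['A', 'B'];
    # a third state appears only in the test data.
    counts = np.array([[8, 2],
                       [3, 7]])

    # Option 1: Laplace smoothing. Add one pseudo-count everywhere, including
    # a new row and column for the unseen state, then renormalize rows.
    k = counts.shape[0] + 1
    smoothed = np.ones((k, k))
    smoothed[:-1, :-1] += counts
    P = smoothed / smoothed.sum(axis=1, keepdims=True)

    # Option 2: back off. Keep the trained rows and give the unknown
    # state a uniform transition row.
    uniform_row = np.full(k, 1.0 / k)
    print(P)
    print(uniform_row)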
1
vote
1 answer

NumPy linalg on a transition matrix

I have the following states:

    states = [(0,2,3,0), (2,2,3,0), (2,2,2,0), (2,2,1,0)]

In addition, I have the following transition matrix:

    import pandas as pd
    transition_matrix = pd.DataFrame([[1, 0, 0, 0],
                                      [0.5, 0.3,…
HJA24
  • 410
  • 2
  • 11
  • 33
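
With an absorbing first row like the one shown, a typical linear-algebra step is the fundamental matrix N = (I - Q)^-1 over the transient states. The matrix below completes the truncated DataFrame with made-up rows purely for illustration:

    import numpy as np
    import pandas as pd

    states = [(0, 2, 3, 0), (2, 2, 3, 0), (2, 2, 2, 0), (2, 2, 1, 0)]

    # First row is absorbing as in the question; the remaining rows are
    # invented so that each row sums to 1.
    transition_matrix = pd.DataFrame([[1.0, 0.0, 0.0, 0.0],
                                      [0.5, 0.3, 0.1, 0.1],
                                      [0.2, 0.4, 0.3, 0.1],
                                      [0.1, 0.2, 0.3, 0.4]],
                                     index=states, columns=states)

    # Fundamental matrix N = (I - Q)^-1: entry (i, j) is the expected number
    # of visits to transient state j starting from i before absorption.
    Q = transition_matrix.values[1:, 1:]
    N = np.linalg.inv(np.eye(Q.shape[0]) - Q)
    print(N.sum(axis=1))   # expected steps to absorption from each transient state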