Questions tagged [markov-chains]

Markov chains are systems that transition from one state to another based only upon their current state. They are widely used across statistical domains to generate sequences according to transition probabilities.

Markov chains (named after their creator, Andrey Markov) are systems which transition from one state to another based only upon their current state. They are memoryless stochastic processes: each state change has an associated probability.

Due to their stochastic nature, Markov chains are well suited to simulating complex real-life processes whose transition probabilities are well known. They are used in a wide variety of fields, with uses too in-depth to list here; an exhaustive list can be found on the associated Wikipedia page.

In programming, they are especially popular for manipulating human languages: Markov text generators are among the most common applications of Markov chains.
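As a quick illustration of the idea, here is a minimal sketch of a first-order, word-level Markov text generator in Python; the toy corpus and the word-level choice of state are illustrative assumptions, not drawn from any question below.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, sampling each next word among the observed followers."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:          # dead end: no observed successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()  # toy corpus (assumption)
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Sampling from the raw follower lists (rather than normalized probabilities) keeps the sketch short; duplicates in a list naturally weight the more frequent transitions.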

577 questions
4
votes
3 answers

Simplifying for-if messes with better structure?

Please move this question to the Code Review area; it is better suited there, because I know the code below is junk and I want critical feedback for a complete rewrite. I am pretty much reinventing the wheel. # Description: you are given a bitwise pattern…
hhh
  • 50,788
  • 62
  • 179
  • 282
4
votes
2 answers

Best iterative way to calculate the fundamental matrix of an absorbing Markov Chain?

I have a very large absorbing Markov chain. I want to obtain the fundamental matrix of this chain to calculate the expected number of steps before absorption. From this question I know that this can be calculated by the equation (I - Q)t = 1, which can…
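A minimal sketch of the iterative route, assuming Q is available as a SciPy sparse matrix: rather than forming the fundamental matrix N = (I - Q)^-1 explicitly, solve (I - Q)t = 1 with an iterative solver such as GMRES. The 2x2 Q below is an illustrative stand-in for the asker's large chain.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Transient-to-transient submatrix Q of an absorbing chain (illustrative).
Q = sp.csr_matrix(np.array([[0.0, 0.5],
                            [0.4, 0.1]]))
n = Q.shape[0]
A = sp.identity(n, format="csr") - Q
ones = np.ones(n)

# GMRES solves (I - Q) t = 1 without ever forming the dense inverse.
t, info = spla.gmres(A, ones)
assert info == 0  # 0 means the solver converged

print(t)  # expected number of steps to absorption from each transient state
```

For a single right-hand side this gives the expected-steps vector t directly; the full fundamental matrix is only needed if every entry of N is required.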
4
votes
2 answers

Expected lifetime of the mouse in this Markov chain model

I was reading the cat-and-mouse Markov model on Wikipedia, and decided to write some Julia code to empirically confirm the analytical results: P = [ 0 0 0.5 0 0.5 ; 0 0 1 0 0 ; 0.25 0.25 0 …
qed
  • 22,298
  • 21
  • 125
  • 196
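A sketch of the same empirical check in Python (the question itself uses Julia): simulate many walks until absorption and average the walk lengths. The rows the excerpt truncates are filled in from the Wikipedia model the question cites, so treat the full matrix and the start state as assumptions.

```python
import numpy as np

# Full 5-state matrix reconstructed from the Wikipedia cat-and-mouse model
# (the excerpt shows only the first rows); state 5 (index 4) is absorbing.
P = np.array([
    [0.00, 0.00, 0.50, 0.00, 0.50],
    [0.00, 0.00, 1.00, 0.00, 0.00],
    [0.25, 0.25, 0.00, 0.25, 0.25],
    [0.00, 0.00, 0.50, 0.00, 0.50],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
absorbing = 4
rng = np.random.default_rng(0)

def lifetime(start, max_steps=10_000):
    """Steps until absorption for one simulated walk."""
    state, steps = start, 0
    while state != absorbing and steps < max_steps:
        state = rng.choice(len(P), p=P[state])
        steps += 1
    return steps

# Start state index 1 is an assumption taken from the Wikipedia example.
samples = [lifetime(start=1) for _ in range(20_000)]
print(np.mean(samples))  # should approach the analytic expected lifetime
```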
4
votes
1 answer

Metropolis-Hastings algorithm in R: correct results?

My Metropolis-Hastings problem has a stationary binomial distribution, and all proposal distributions q(i,j) are 0.5. With reference to the plot and histogram, should the algorithm be so clearly centered around 0.5, the probability from the binomial…
G. Debailly
  • 1,121
  • 1
  • 10
  • 13
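A minimal sketch of one reading of this setup in Python: states are the integers 0..n, the target is Binomial(n, 0.5), and the proposal is a symmetric ±1 random walk (so q(i, j) = 0.5 each way). The value of n and the chain length are illustrative assumptions.

```python
import numpy as np
from scipy.stats import binom

n, p = 10, 0.5                      # illustrative target Binomial(n, p)
target = lambda k: binom.pmf(k, n, p)
rng = np.random.default_rng(1)

state, chain = n // 2, []
for _ in range(50_000):
    proposal = state + rng.choice([-1, 1])   # symmetric +/-1 proposal
    if 0 <= proposal <= n:                   # out-of-range proposals are rejected
        alpha = target(proposal) / target(state)
        if rng.random() <= alpha:
            state = proposal
    chain.append(state)

# Under this reading, the samples concentrate around the binomial mean n*p,
# not around p itself; the histogram of `chain` should match the pmf.
print(np.mean(chain))
```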
4
votes
1 answer

Creating three-state Markov chain plot

I have the following dataframe with three states: angry, calm, and tired. The dataframe below provides individual cases of transition from one state into…
Oposum
  • 1,155
  • 3
  • 22
  • 38
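The question works in R; this Python sketch shows the step underneath such a plot: building the three-state transition matrix from observed cases. The dataframe below is an illustrative stand-in for the asker's data.

```python
import pandas as pd

# Each row is one observed transition (illustrative data).
df = pd.DataFrame({
    "from": ["angry", "calm", "calm", "tired", "angry", "calm"],
    "to":   ["calm",  "calm", "tired", "angry", "angry", "tired"],
})

counts = pd.crosstab(df["from"], df["to"])
# Normalize each row so it sums to 1, giving transition probabilities.
trans = counts.div(counts.sum(axis=1), axis=0)
print(trans)
```

The resulting matrix is what plotting tools (e.g. the markovchain or diagram packages in R) take as input.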
4
votes
2 answers

find Markov steady state with left eigenvalues (using numpy or scipy)

I need to find the steady state of Markov models using the left eigenvectors of their transition matrices using some python code. It has already been established in this question that scipy.linalg.eig fails to provide actual left eigenvectors as…
Aaron Bramson
  • 1,176
  • 3
  • 20
  • 34
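A minimal sketch of one common route, assuming a row-stochastic transition matrix: scipy.linalg.eig(T, left=True) returns the left eigenvectors, and the one whose eigenvalue is closest to 1, normalized to sum to 1, is the steady state. The 2x2 matrix is illustrative.

```python
import numpy as np
from scipy.linalg import eig

T = np.array([[0.90, 0.10],
              [0.15, 0.85]])  # illustrative row-stochastic matrix

w, vl, vr = eig(T, left=True)
i = np.argmin(np.abs(w - 1.0))   # pick the eigenvalue closest to 1
pi = np.real(vl[:, i])
pi /= pi.sum()                   # normalize to a probability vector
print(pi)                        # ~ [0.6, 0.4]
```

Note that eig returns eigenvectors normalized to unit length, not to sum 1, so the explicit renormalization is the step that is easy to miss.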
4
votes
1 answer

Draw Markov chain given transition matrix in R

Let trans_m be an n-by-n transition matrix of a first-order Markov chain. In my problem, n is large, say 10,000, and the matrix trans_m is a sparse matrix constructed with the Matrix package. Otherwise, the size of trans_m would be huge. My goal is to…
semibruin
  • 963
  • 1
  • 9
  • 18
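The question is in R with the Matrix package; here is the same idea sketched in Python with scipy.sparse: to simulate a path, sample each step only among the nonzero entries of the current row, which keeps the per-step cost independent of n. The random sparse chain below is an illustrative assumption.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(42)
n = 10_000

# Illustrative sparse chain: each state moves to one of three random successors.
rows, cols, vals = [], [], []
for i in range(n):
    succ = rng.choice(n, size=3, replace=False)
    rows += [i] * 3
    cols += succ.tolist()
    vals += [1 / 3] * 3
trans_m = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))

def simulate(start, steps):
    """Draw one realization of the chain from `start`."""
    path = [start]
    for _ in range(steps):
        row = trans_m.getrow(path[-1])
        nxt = rng.choice(row.indices, p=row.data)  # sample among nonzeros only
        path.append(int(nxt))
    return path

print(simulate(start=0, steps=10))
```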
4
votes
1 answer

Metropolis-Hastings MCMC with R

I'm trying to implement a simple MCMC using the MH algorithm in R. The problem is that I get this error (I tried to calculate alpha, and it's not an NA problem): Error in if (runif(1) <= alpha) { : missing value where TRUE/FALSE needed. Here is my…
nidabdella
  • 811
  • 8
  • 24
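That R error usually means alpha evaluated to NA/NaN, most often from a 0/0 density ratio. A hedged sketch of the usual defensive fix, shown in Python rather than the question's R: compare on the log scale, so the acceptance test never produces a missing value. The standard-normal target is illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def log_target(x):
    return -0.5 * x**2  # illustrative: standard normal, up to a constant

state, chain = 0.0, []
for _ in range(10_000):
    proposal = state + rng.normal(scale=1.0)
    log_alpha = log_target(proposal) - log_target(state)
    # log(u) <= log_alpha never yields NaN for u drawn from (0, 1)
    if np.log(rng.random()) <= log_alpha:
        state = proposal
    chain.append(state)

print(np.mean(chain), np.std(chain))  # ~0 and ~1 for the standard normal
```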
4
votes
2 answers

Python numpy/scipy eigenvectors seemingly not correct for Markov chain model

I have a large (351,351) numpy transition matrix. I would like to find the steady state vector for this using numpy (I also tried scipy, which has the exact same function). sstate = np.linalg.eig(T)[1][:,0] So this, I believe, should give me the…
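A likely pitfall, sketched below: np.linalg.eig does not sort its eigenvalues, so column 0 of the eigenvector matrix need not correspond to eigenvalue 1, and the eigenvectors come back unit-norm rather than summing to 1. The small matrix is an illustrative stand-in; it is column-stochastic, so the right eigenvector of eigenvalue 1 is stationary (for a row-stochastic T, decompose T.T instead).

```python
import numpy as np

T = np.array([[0.7, 0.2],
              [0.3, 0.8]])  # illustrative column-stochastic matrix

w, V = np.linalg.eig(T)
i = np.argmin(np.abs(w - 1.0))   # index of the eigenvalue nearest 1
sstate = np.real(V[:, i])
sstate /= sstate.sum()           # rescale from unit norm to probabilities
print(sstate)                    # ~ [0.4, 0.6]
```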
4
votes
2 answers

Libraries or tools for generating random but realistic text

I'm looking for tools for generating random but realistic text. I've implemented a Markov Chain text generator myself and while the results were promising, my attempts at improving them haven't yielded any great successes. I'd be happy with tools…
Carl Summers
  • 495
  • 1
  • 4
  • 12
4
votes
1 answer

Given 100,000 word-to-phonemes mappings, how can I split the original words on the phoneme boundaries?

I have a mapping of 100,000+ words to their phonemes (CMUdict), like: ABANDONED => [ 'AH', 'B', 'AE', 'N', 'D', 'AH', 'N', 'D' ] I want to split the original words' letters into a number of groups equal to the number of phonemes, e.g. ABANDONED =>…
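A crude baseline, sketched only to show the required output shape: split the letters into as many contiguous groups as there are phonemes, as evenly as possible. A real solution would learn the letter-to-phoneme alignment (e.g. with EM or a hand-built cost table); this even split is purely an assumption for illustration.

```python
def split_even(word, phonemes):
    """Partition `word` into len(phonemes) contiguous, near-equal groups."""
    n, k = len(word), len(phonemes)
    groups, start = [], 0
    for i in range(k):
        end = round((i + 1) * n / k)  # integer boundaries of an even partition
        groups.append(word[start:end])
        start = end
    return list(zip(groups, phonemes))

print(split_even("ABANDONED", ['AH', 'B', 'AE', 'N', 'D', 'AH', 'N', 'D']))
```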
4
votes
2 answers

Left eigenvectors not giving correct (Markov) stationary probability in scipy

Given the following Markov matrix: import numpy, scipy.linalg A = numpy.array([[0.9, 0.1],[0.15, 0.85]]) The stationary probability exists and is equal to [.6, .4]. This is easy to verify by taking a large power of the matrix: B = A.copy() for _ in…
Hooked
  • 84,485
  • 43
  • 192
  • 261
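A minimal sketch completing the power check the excerpt describes: for a regular chain, repeated squaring drives every row of the matrix toward the stationary vector. The number of squarings is an arbitrary choice.

```python
import numpy as np

A = np.array([[0.90, 0.10],
              [0.15, 0.85]])

B = A.copy()
for _ in range(10):
    B = B @ B          # repeated squaring: B becomes A**(2**10)
print(B)               # every row ~ [0.6, 0.4], the stationary vector
```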
3
votes
0 answers

building a Markov chain with Spark

I'm working with a delta table containing log entries. Is there anything in Spark (PySpark) that would help me build a Markov chain from certain events derived from this table? If there aren't specific libraries for doing this, I would appreciate…
Dmitry B.
  • 9,107
  • 3
  • 43
  • 64
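As far as core PySpark goes, there is no dedicated Markov-chain helper, but the counting step maps cleanly onto the DataFrame API. A sketch under assumed names (an events table with session_id, ts, and event columns, none of which are from the question): lag() pairs each event with its predecessor, and a grouped count normalized per predecessor gives the transition probabilities.

```python
from pyspark.sql import SparkSession, Window
import pyspark.sql.functions as F

spark = SparkSession.builder.getOrCreate()
events = spark.read.table("events")  # assumed registered delta table of log entries

# Pair each event with its predecessor within a session, ordered by timestamp.
w = Window.partitionBy("session_id").orderBy("ts")
pairs = (events
         .withColumn("prev", F.lag("event").over(w))
         .where(F.col("prev").isNotNull()))

# Count transitions, then normalize per source state to get probabilities.
counts = pairs.groupBy("prev", "event").count()
totals = counts.groupBy("prev").agg(F.sum("count").alias("total"))
transitions = (counts.join(totals, "prev")
                     .withColumn("prob", F.col("count") / F.col("total")))
transitions.show()
```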
3
votes
2 answers

Repair Data for Markov Chain Monte Carlo Simulation

As is known, all probabilities need to sum to 1. I have a Pandas DataFrame where the probability of one event is sometimes missing. Since all elements of a row need to sum to one, I want to replace NaN with a calculated value. With…
Hans Peter
  • 99
  • 8
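A sketch of the repair, assuming at most one missing entry per row (with two or more NaNs the remaining mass cannot be attributed unambiguously); the DataFrame below is an illustrative stand-in for the asker's data.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": [0.2, 0.5, np.nan],
    "B": [0.3, np.nan, 0.4],
    "C": [0.5, 0.3, 0.1],
})

# Each NaN receives the remaining mass: 1 minus the sum of the known entries.
remainder = 1.0 - df.sum(axis=1, skipna=True)
df = df.apply(lambda row: row.fillna(remainder[row.name]), axis=1)
print(df)  # every row now sums to 1
```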
3
votes
0 answers

Understanding how to calculate removal effects in a Markov chain

I am currently trying to model marketing multi-channel attribution. All the articles and packages I have come across use a special "start" state, and the removal effect is calculated based on that start state using the following matrix…
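A sketch of the usual convention, with illustrative states and probabilities (none taken from the question): the conversion probability is read off the absorbing chain via the fundamental matrix, a channel is "removed" by redirecting every transition into it to the null state, and the removal effect is the relative drop in conversion probability.

```python
import numpy as np

# State order: start, channel C1, channel C2, conversion, null (illustrative).
P = np.array([
    [0.0, 0.6, 0.4, 0.0, 0.0],   # start
    [0.0, 0.0, 0.3, 0.5, 0.2],   # channel C1
    [0.0, 0.2, 0.0, 0.4, 0.4],   # channel C2
    [0.0, 0.0, 0.0, 1.0, 0.0],   # conversion (absorbing)
    [0.0, 0.0, 0.0, 0.0, 1.0],   # null (absorbing)
])

def conversion_prob(P, start=0, conv=3, transient=(0, 1, 2)):
    """Probability of absorbing in `conv` when starting from `start`."""
    t = list(transient)
    Q = P[np.ix_(t, t)]                    # transient -> transient
    R = P[np.ix_(t, [conv])]               # transient -> conversion
    N = np.linalg.inv(np.eye(len(t)) - Q)  # fundamental matrix
    return (N @ R)[t.index(start), 0]

base = conversion_prob(P)
for channel in (1, 2):
    P_removed = P.copy()
    # Redirect every transition into `channel` to the null state instead.
    P_removed[:, 4] += P_removed[:, channel]
    P_removed[:, channel] = 0.0
    effect = (base - conversion_prob(P_removed)) / base
    print(f"removal effect of C{channel}: {effect:.3f}")
```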