Questions tagged [markov-chains]

Markov chains are systems which transition from one state to another based only upon their current state. They are used widely in various statistical domains to generate sequences based upon probabilities.

Markov chains (named after the mathematician Andrey Markov) are systems which transition from one state to another based only upon their current state. They are memoryless stochastic processes: each state change has an associated probability.
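The memoryless transition rule can be sketched in a few lines of Python; the two-state weather chain and its probabilities below are purely illustrative:

```python
import random

# A toy two-state weather chain; the transition probabilities are
# illustrative, not taken from any data.
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("sunny", 0.4), ("rainy", 0.6)],
}

def step(state):
    """Choose the next state using only the current one (memorylessness)."""
    states, weights = zip(*transitions[state])
    return random.choices(states, weights=weights)[0]

random.seed(0)
path = ["sunny"]
for _ in range(5):
    path.append(step(path[-1]))
print(path)   # a 6-state sample trajectory
```

Note that `step` never looks at how the chain arrived in its current state; that is the defining Markov property.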

Due to their stochastic nature, Markov chains are well suited to simulating complex real-life processes whose transition probabilities are well known. They are used in a wide variety of fields, with applications too numerous to list here; an exhaustive list can be found on the associated Wikipedia page.

In programming, they are especially popular for working with human languages; Markov text generators are among the most common applications of Markov chains.
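A minimal word-level Markov text generator might look like the following sketch; the training text is a tiny made-up sample, whereas real generators train on much larger corpora:

```python
import random
from collections import defaultdict

# Tiny made-up training text for illustration only.
text = ("the cat sat on the mat and the dog sat on the rug "
        "and the cat saw the dog")
words = text.split()

# Map each word to every word that follows it in the source text.
chain = defaultdict(list)
for cur, nxt in zip(words, words[1:]):
    chain[cur].append(nxt)

def generate(start, length=8):
    """Walk the chain: each next word depends only on the current word."""
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:       # the word never had a successor
            break
        out.append(random.choice(followers))
    return " ".join(out)

random.seed(1)
print(generate("the"))
```

Because follower words are stored with repetition, frequent word pairs in the source are proportionally more likely in the output.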

577 questions
0 votes, 1 answer

How to do MCMC simulation using the Metropolis-Hastings algorithm in MATLAB?

I am trying to simulate a distribution for the parameter theta, f = theta^(z_f+n+alpha-1) * (1-theta)^(n+1-z_f-k+beta-1), where every parameter except theta is known. I am using the Metropolis-Hastings algorithm to do the MCMC simulation. My proposal…
Spandyie • 914 • 2 • 11 • 23
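A generic Metropolis-Hastings loop for a single parameter theta in (0, 1) can be sketched in plain Python. The exponents mirror the target stated in the question, but the values chosen for z_f, n, alpha, k and beta are placeholders, not the asker's:

```python
import math
import random

# Placeholder values, not the asker's; the target shape matches the question:
# f(theta) ∝ theta^(z_f+n+alpha-1) * (1-theta)^(n+1-z_f-k+beta-1)
z_f, n, alpha, k, beta = 3, 10, 2.0, 5, 2.0

def log_target(theta):
    a = z_f + n + alpha - 1          # exponent on theta
    b = n + 1 - z_f - k + beta - 1   # exponent on (1 - theta)
    return a * math.log(theta) + b * math.log(1 - theta)

random.seed(0)
theta, samples = 0.5, []
for _ in range(5000):
    # Uniform independence proposal on (0.001, 0.999); its density is
    # constant, so the acceptance ratio reduces to target(prop)/target(theta).
    prop = random.uniform(0.001, 0.999)
    if random.random() < math.exp(min(0.0, log_target(prop) - log_target(theta))):
        theta = prop
    samples.append(theta)

print(sum(samples) / len(samples))   # estimate of the target's mean
```

Working in log space avoids overflow when the exponents are large; the `min(0.0, ...)` clamp keeps `math.exp` from overflowing on strongly favoured proposals.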
0 votes, 1 answer

Markov chain probability calculation - Python

I have a Python dictionary with state transition probabilities of a Markov-chain model. dict_m = {('E', 'F'): 0.29032258064516131, ('D', 'F'): 0.39726027397260272, ('D', 'D'): 0.30136986301369861, ('E', 'D'): 0.32258064516129031, ('E', 'E'):…
Nilani Algiriyage • 32,876 • 32 • 87 • 121
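With such a dictionary keyed by (current, next) pairs, the probability of a state sequence is just the product of the transition probabilities along it. A small Python sketch, reusing the probabilities visible in the excerpt; the ('E', 'E') and ('D', 'E') entries are placeholders chosen so each row sums to 1, not the asker's real values:

```python
dict_m = {('E', 'F'): 0.29032258064516131, ('D', 'F'): 0.39726027397260272,
          ('D', 'D'): 0.30136986301369861, ('E', 'D'): 0.32258064516129031,
          ('E', 'E'): 0.38709677419354838,   # placeholder, not the asker's value
          ('D', 'E'): 0.30136986301369867}   # placeholder, not the asker's value

def path_prob(path):
    """Multiply the transition probabilities along a state sequence."""
    p = 1.0
    for cur, nxt in zip(path, path[1:]):
        p *= dict_m[(cur, nxt)]
    return p

print(path_prob(['E', 'D', 'F']))   # P(E->D) * P(D->F)
```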
0 votes, 1 answer

R msm package: how to fit and get discrete-time, time-homogeneous transition probabilities?

I have a sequence of states and corresponding months. mcdata <- structure(list(state = structure(c(2L, 1L, 2L, 2L, 2L, 2L, 4L, 4L, 2L, 4L, 2L, 3L, 1L, 3L, 2L, 2L, 2L, 4L, 2L, 3L, 4L, 2L, 3L, 3L, 3L, 3L, 3L, 1L, 4L, 2L, 3L, 2L, 2L, 4L, 3L, 2L, 4L,…
user2968765 • 145 • 1 • 10
0 votes, 2 answers

Improving the efficiency of randsample in MATLAB for a Markov chain simulation

I am using MATLAB to simulate an accumulation process with several random walks that accumulate towards a threshold in parallel. To select which random walk will increase at time t, randsample is used. If the vector V represents the active random…
skleene • 389 • 3 • 13
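For reference, the weighted draw that `randsample(1:N, 1, true, V)` performs has a direct pure-Python analogue in `random.choices`; the weight values below are made up:

```python
import random

# Pure-Python stand-in for MATLAB's randsample(1:N, 1, true, V): draw the
# index of the walk that steps at time t, with probability proportional to
# the (made-up) weights in V.
V = [0.1, 0.4, 0.2, 0.3]
random.seed(0)

walker = random.choices(range(len(V)), weights=V, k=1)[0]

# Sanity check: over many draws the empirical frequencies track V.
counts = [0] * len(V)
for _ in range(10_000):
    counts[random.choices(range(len(V)), weights=V, k=1)[0]] += 1
```

`random.choices` normalises the weights internally, so V does not need to sum to 1.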
0 votes, 1 answer

Modeling Shocks to a Maximization in R

I am currently trying to write code that will solve the consumption path over a 100x100 state space, subject to possible shocks in production. I currently have ###################################Part…
0 votes, 1 answer

How to solve discrete-time Markov Chains in Sage in a short way

I'm new to Sage. I'm able to solve a DTMC in Octave using this short code: a = 0.2 s = 0.6 P = [ (1-a)*(1-a), (1-a)*a, a*(1-a), a*a; (1-a)*s, (1-a)*(1-s), a*s, a*(1-s); s*(1-a), s*a, (1-s)*(1-a), (1-s)*a; 0, …
Albert Vonpupp • 4,557 • 1 • 17 • 20
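Even without Sage or Octave, the long-run behaviour of a small DTMC can be checked with plain power iteration in Python; the 2-state matrix below is a made-up example, not the chain from the question:

```python
# Power-iteration check of a DTMC's long-run behaviour in plain Python.
# The 2-state transition matrix is a made-up example.
P = [[0.9, 0.1],
     [0.5, 0.5]]          # each row sums to 1

pi = [1.0, 0.0]           # any starting distribution works
for _ in range(200):      # iterate pi <- pi * P until it settles
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

print(pi)                 # approaches the stationary distribution
```

For this matrix the exact stationary distribution is (5/6, 1/6), which the iteration reaches to machine precision well within 200 steps.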
0 votes, 2 answers

Markov Chain Monte Carlo, proposal distribution for multivariate Bernoulli distribution?

Is there a suitable proposal distribution for a multivariate Bernoulli model? For example, I want to sample from a probability distribution p(x) = p*(x) / Z, where x ∈ {0,1}^M and Z is the normalization constant, which is intractable to compute directly…
Jing • 895 • 6 • 14 • 38
0 votes, 0 answers

MCMC Sampling / Gibbs Sampling

I had a midterm in my Artificial Intelligence class on MCMC sampling (is it the same as Gibbs sampling?). I was looking over the solution which I found online (in my midterm it was called an MCMC likelihood-weighting sampler, but in the attached solution…
0 votes, 1 answer

How to visualize a state transition diagram in JUNG(Java Universal Network/Graph Framework)?

I am stuck on the visualization part. I have created a DirectedSparseMultiGraph for the purpose of visualizing the following transition diagram, and I want to draw it in the same manner as depicted in the image. At the moment I am getting this. I…
0 votes, 1 answer

'if else' statement to find the state matrix based on sample from uniform distribution in R

I have drawn a sample u from the random variable u ~ uniform(0,1): set.seed(123) num_samples <- 5 #number of samples num_time_periods <- 5 # number of years sample_u <- array(0,c(num_samples,num_time_periods)) for(i in 2:num_time_periods){ …
NSAA • 175 • 1 • 3 • 14
0 votes, 1 answer

Solving a system of equations to find expected residence time of a Markov Chain

I have been told that in order to calculate the expected residence time for a set of states I can use the following approach: construct a Markov chain whose (i, j) entry is the probability of a transition from state i to state j. Transpose the matrix,…
Mads T • 530 • 3 • 14
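The full recipe is truncated in the excerpt, but one common reading of "transpose and solve" computes the long-run residence probabilities pi, which satisfy P^T pi = pi together with sum(pi) = 1. A pure-Python sketch with a made-up 3-state matrix:

```python
# Long-run residence probabilities of a DTMC by solving (P^T - I) pi = 0
# with the normalisation sum(pi) = 1. The matrix is a made-up example.
P = [[0.6, 0.4, 0.0],
     [0.3, 0.3, 0.4],
     [0.1, 0.5, 0.4]]
n = 3

# Build (P^T - I) pi = 0, replacing the last equation by sum(pi) = 1
# to pin down the scale.
A = [[P[j][i] - (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
A[-1] = [1.0] * n
b = [0.0] * (n - 1) + [1.0]

# Plain Gaussian elimination with partial pivoting.
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, n):
        f = A[r][col] / A[col][col]
        for c in range(col, n):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]

pi = [0.0] * n
for r in range(n - 1, -1, -1):
    pi[r] = (b[r] - sum(A[r][c] * pi[c] for c in range(r + 1, n))) / A[r][r]

print(pi)   # stationary (long-run residence) probabilities
```

Since P^T - I is rank-deficient for a stochastic matrix, replacing one redundant equation with the normalisation row is what makes the system uniquely solvable.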
0 votes, 0 answers

Cheating Absorbing Markov Chains in R

I am building a lineup simulator that uses absorbing Markov chains to simulate the number of runs that a certain lineup would score. There is a different transition matrix for each of the 9 players in the lineup, and one game is simulated…
BaseballR • 147 • 2 • 12
0 votes, 1 answer

Convergence of value iteration

Why is the termination condition of the value-iteration algorithm (example: http://aima-java.googlecode.com/svn/trunk/aima-core/src/main/java/aima/core/probability/mdp/search/ValueIteration.java ) for an MDP (Markov Decision Process) ||Ui+1-Ui|| <…
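The condition comes from the contraction property of the Bellman backup: stopping when max|U_{i+1}(s) - U_i(s)| < eps(1 - gamma)/gamma guarantees the returned utilities are within eps of the true ones. A toy illustration in Python; the 2-state, 2-action MDP below is invented:

```python
# Toy value iteration illustrating the AIMA-style termination test.
gamma, eps = 0.9, 1e-4
R = [0.0, 1.0]                      # reward received in each state
# P[s][a][t]: probability of moving from state s to state t under action a
P = [
    [[0.9, 0.1], [0.5, 0.5]],
    [[0.2, 0.8], [0.6, 0.4]],
]

U = [0.0, 0.0]
while True:
    # One Bellman backup: best action's expected discounted utility.
    U_next = [R[s] + gamma * max(sum(P[s][a][t] * U[t] for t in range(2))
                                 for a in range(2))
              for s in range(2)]
    delta = max(abs(U_next[s] - U[s]) for s in range(2))
    U = U_next
    if delta < eps * (1 - gamma) / gamma:   # the termination test in question
        break

print(U)   # within eps of the true utilities
```

Because each backup shrinks the error by a factor of gamma, a small change between successive iterates bounds the total remaining change of the whole sequence.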
0 votes, 1 answer

Generate a new text using the style of one text and the nouns/verbs of another?

I want to generate plausible (or less-than-plausible is okay too) nonsense text, similar to the way a Markov chain approach would, but I want the nouns and verbs of the generated text to come from a different source than the analyzed text.…
mix • 6,943 • 15 • 61 • 90
0 votes, 2 answers

Inexact power of matrix in MATLAB

As I was bored, I checked the stationary theorem regarding the transition matrix of a Markov chain. So I defined a simple one, e.g.: >> T=[0.5 0.5 0; 0.5 0 0.5; 0.2 0.4 0.4]; The stationary theorem says, if you calculate the transition matrix to a…
Tik0 • 2,499 • 4 • 35 • 50
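The behaviour the question probes can be reproduced in pure Python with the matrix T given in the excerpt: repeated multiplication drives every row of T^k toward the stationary distribution, but only up to floating-point rounding, which is why a large MATLAB matrix power looks inexact:

```python
# The 3x3 transition matrix from the question.
T = [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.2, 0.4, 0.4]]

def matmul(A, B):
    """Plain triple-loop product of two square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Tk = T
for _ in range(200):          # Tk ends up holding T^201
    Tk = matmul(Tk, T)

print(Tk[0])                  # every row now approximates the stationary distribution
```

For this T the stationary distribution is exactly (8/19, 6/19, 5/19); the computed rows agree with it and with each other only to roughly machine precision, never exactly.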