Questions tagged [markov-chains]

Markov chains are systems which transition from one state to another based only upon their current state. They are used widely in various statistical domains to generate sequences based upon probabilities.

Named after their creator, Andrey Markov, they are memoryless, semi-random processes: each possible state change has an associated probability.

Due to their statistical nature, Markov chains are well suited to simulating complex real-life processes whose transition probabilities are well known. They are used in a wide variety of fields, with uses too numerous to list here; an extensive list can be found on the associated Wikipedia page.

In programming, they are especially popular for working with human language; Markov text generators are a common application.
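As a minimal illustration of that idea, a word-level Markov text generator can be sketched as follows (the toy corpus and function names are illustrative):

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words that follow it in the corpus."""
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length, seed=None):
    """Walk the chain from `start`, choosing each next word at random."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:        # dead end: no observed successor
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat the cat ran".split()
chain = build_chain(corpus)
print(generate(chain, "the", 5, seed=1))
```

Because successors are stored with multiplicity, more frequent bigrams are chosen proportionally more often, which is exactly the Markov property at work.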

577 questions
-1
votes
1 answer

Question about constructing transition matrix for a scenario

I am facing a problem constructing a transition probability matrix in my studies. The following is the scenario of the question: **Assuming a phone has had i faults (for i = 0,1,2,3), the probability of having another fault is p, independently…
Ben
  • 1
  • 1
-1
votes
2 answers

Finding conditional and joint probabilities from a simulation

Consider the Markov chain with state space S = {1, 2}, the given transition matrix, and initial distribution α = (1/2, 1/2). Suppose the source code for the simulation is the following: alpha <- c(1, 1) / 2 mat <- matrix(c(1 / 2, 0, 1 / 2, 1), nrow = 2, ncol…
user366312
  • 16,949
  • 65
  • 235
  • 452
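A hedged sketch of the simulation this question describes, assuming the transition matrix implied by the truncated R snippet (rows [1/2, 1/2] and [0, 1], with state 2 absorbing), written here in Python as a self-contained estimate of joint and conditional probabilities:

```python
import random

# Transition matrix assumed from the (truncated) R snippet:
# P[i][j] = P(next = j | current = i), with state 2 absorbing.
P = [[0.5, 0.5],
     [0.0, 1.0]]
alpha = [0.5, 0.5]  # initial distribution over states {1, 2}

def simulate(n_steps, rng):
    """Return one trajectory X0, X1, ..., Xn as 1-based state labels."""
    state = rng.choices([0, 1], weights=alpha)[0]
    path = [state + 1]
    for _ in range(n_steps):
        state = rng.choices([0, 1], weights=P[state])[0]
        path.append(state + 1)
    return path

rng = random.Random(0)
runs = [simulate(2, rng) for _ in range(100_000)]
# Estimate P(X1 = 1, X2 = 1) and P(X2 = 1 | X1 = 1) by counting.
joint = sum(r[1] == 1 and r[2] == 1 for r in runs) / len(runs)
cond = (sum(r[1] == 1 and r[2] == 1 for r in runs)
        / sum(r[1] == 1 for r in runs))
print(joint, cond)  # theory: 0.125 and 0.5
```

The theoretical values follow directly: P(X1 = 1) = 1/2 · 1/2 = 1/4, and the conditional step contributes another factor of 1/2.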
-1
votes
1 answer

Create a Markov chain object from a transition matrix

I have a transition matrix with 4 rows and 16 columns, containing the probabilities of transitioning from one Wikipedia article to another. My rows have been normalised to sum to 1. Here is an…
c.vdr
  • 9
  • 2
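The likely sticking point here is that a 4 x 16 matrix cannot define a Markov chain: a transition matrix must be square, with one row and one column per state, and every row summing to 1. A small sketch of both checks (function names are illustrative):

```python
import numpy as np

def normalize_rows(counts):
    """Turn a non-negative count matrix into row-stochastic probabilities."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def validate_transition_matrix(P, tol=1e-9):
    """A transition matrix must be square, non-negative,
    and have every row sum to 1."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        raise ValueError(f"matrix must be square, got shape {P.shape}")
    if (P < 0).any():
        raise ValueError("probabilities must be non-negative")
    if not np.allclose(P.sum(axis=1), 1.0, atol=tol):
        raise ValueError("each row must sum to 1")
    return P

counts = [[3, 1],
          [0, 2]]
P = validate_transition_matrix(normalize_rows(counts))
print(P)  # rows: [0.75, 0.25] and [0.0, 1.0]
```

A 4 x 16 input would fail the square check, which is the first thing to resolve before handing the matrix to any chain-construction routine.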
-1
votes
2 answers

Markov chain generator

The generator should take a starting point (an integer). With each pass of the resultant generator object to next, a random step from the last point returned (or the starting point if no point has yet been returned) should be performed. The result…
roadrunner
  • 45
  • 6
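One plausible reading of the requirement, sketched as a Python generator (the unit step choices are an assumption, since the post is truncated):

```python
import random

def markov_walk(start, steps=(-1, 1), seed=None):
    """Infinite generator: each call to next() takes one random step
    from the last point returned (or from `start` before any step)."""
    rng = random.Random(seed)
    point = start
    while True:
        point += rng.choice(steps)
        yield point

walk = markov_walk(0, seed=42)
points = [next(walk) for _ in range(5)]
print(points)
```

Keeping `point` as local generator state means each value depends only on the previous one, which is exactly the Markov property the exercise asks for.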
-1
votes
1 answer

How to implement this eigenvector Python code in Java

I have found the right result with an algorithm in Python. How can I implement this code in Java? import numpy as np from scipy.linalg import eig transition_mat = np.matrix([[0.8, 0.15, 0.05], [0.075, 0.85, 0.075], [0.05, 0.15, 0.8]]) S, U =…
Fascal
  • 1
  • 1
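Before porting to Java, it may help to see what the SciPy call computes: the stationary distribution is the left eigenvector of the transition matrix for eigenvalue 1. Power iteration reproduces it with plain loops and arrays, which translates to Java without any linear-algebra library; a Python sketch of that approach:

```python
import numpy as np

# Transition matrix from the question's Python snippet.
P = np.array([[0.8,   0.15, 0.05 ],
              [0.075, 0.85, 0.075],
              [0.05,  0.15, 0.8  ]])

def stationary_power_iteration(P, iters=1000):
    """Repeatedly apply pi <- pi @ P from a uniform start; this converges
    to the left eigenvector for eigenvalue 1, i.e. the stationary
    distribution. Plain loops and dot products, so it ports to Java."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

pi = stationary_power_iteration(P)
print(pi)  # approximately [0.25, 0.5, 0.25]
```

The Java version is then a pair of nested loops computing the vector-matrix product, repeated until the vector stops changing.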
-1
votes
1 answer

Markov chain in R

Suppose we have a ten-state system where an observation can enter the system in any one of the ten states with equal probability and move from the given state into a new state, also with equal probability (the observation's new state isn't…
-1
votes
1 answer

Markov Transition Probability Matrix

I am having trouble calculating a transition probability matrix. I have a couple of ids and their search patterns (pages visited). Example: Id Page 1 A 1 A 1 B 2 C 2 C 3 D 3 E 3 F 1 D 1 G 4 G 4 C 4 H 2 D 2 C I also…
Meheli
  • 1
  • 2
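A sketch of one way to build that matrix, assuming transitions are counted within each id's visit sequence rather than across ids (the data below is the visible part of the question's example):

```python
from collections import Counter, defaultdict

# (id, page) rows from the question's example, in visit order.
rows = [(1, "A"), (1, "A"), (1, "B"), (2, "C"), (2, "C"),
        (3, "D"), (3, "E"), (3, "F"), (1, "D"), (1, "G"),
        (4, "G"), (4, "C"), (4, "H"), (2, "D"), (2, "C")]

def transition_matrix(rows):
    """Group pages by id (preserving visit order), count page-to-page
    transitions within each id's sequence, and normalize per row."""
    seqs = defaultdict(list)
    for uid, page in rows:
        seqs[uid].append(page)
    counts = defaultdict(Counter)
    for seq in seqs.values():
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(c.values()) for b, n in c.items()}
            for a, c in counts.items()}

probs = transition_matrix(rows)
print(probs["A"])  # {'A': 0.5, 'B': 0.5}
```

Grouping by id first matters: otherwise the interleaved rows would create spurious transitions between different users' sessions.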
-1
votes
1 answer

Markov chain (c code on windows fails)

I am trying to get this piece of code from Kernighan's book The Practice of Programming to work on my workstation (Windows 7 + VS2015 Community Edition), but I get a strange error. void generate(int nwords) { State *sp; Suffix *suf; char…
-1
votes
1 answer

DFA for expected coin tosses

I am trying to work through this problem: A fair coin is tossed until two heads appear in a row. What is the expected number of coin tosses? Design a DFA for the language L = {w | w has 11 as a substring}. Use this DFA as a Markov chain to calculate the…
Zach
  • 119
  • 3
  • 17
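The standard approach: treat the DFA states as Markov chain states and solve a small linear system for the expected remaining tosses from each state. A sketch using exact fractions:

```python
from fractions import Fraction

# DFA states for "contains 11 (= HH)": q0 = no progress, q1 = last
# toss was H, q2 = HH seen (absorbing). With E[q] = expected remaining
# tosses from state q and E[q2] = 0:
#   E[q0] = 1 + 1/2 * E[q1] + 1/2 * E[q0]
#   E[q1] = 1 + 1/2 * 0     + 1/2 * E[q0]
half = Fraction(1, 2)
# The first equation rearranges to E[q0] = 2 + E[q1]; substituting
# into the second gives E[q1] = 2 + half * E[q1].
E_q1 = Fraction(2) / (1 - half)
E_q0 = 2 + E_q1
print(E_q0)  # 6
```

So the expected number of tosses until HH is 6, with the intermediate state contributing E[q1] = 4.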
-1
votes
1 answer

random walk based on previous moves

I am pretty new to this area, so the question I ask might be straightforward or look naive to professionals. For 1D random walk problems, such as the drunkard's walk, there is no connection between the current move and the…
Lexus00
  • 3
  • 1
-1
votes
1 answer

How to find Finite State-Transition probability matrix of Markov chain (FSMC)

I have channel measurements with values > 20,000, which have to be divided into discrete levels (in my case K=8) and mapped to states. I have to find the state-transition probability matrix for this in…
Ram Sundar
  • 13
  • 1
  • 6
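One common recipe (an assumption here, since the post is truncated): partition the measurements into K = 8 equiprobable levels via quantiles, map each measurement to its level, and count level-to-level transitions. A sketch with illustrative data standing in for the channel measurements:

```python
import numpy as np

def quantize_and_estimate(measurements, K=8):
    """Quantize a continuous series into K equiprobable levels
    (quantile boundaries), then estimate the K x K transition matrix
    by counting consecutive level pairs and normalizing each row."""
    x = np.asarray(measurements, dtype=float)
    edges = np.quantile(x, np.linspace(0, 1, K + 1)[1:-1])
    states = np.digitize(x, edges)              # levels 0 .. K-1
    counts = np.zeros((K, K))
    for a, b in zip(states, states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums,
                      out=np.zeros_like(counts), where=row_sums > 0)
    return states, probs

# Illustrative data standing in for the channel measurements.
x = np.sin(np.linspace(0, 50, 2000)) * 10_000 + 10_000
states, probs = quantize_and_estimate(x, K=8)
print(states.min(), states.max())  # 0 7
```

Equiprobable quantile boundaries are only one choice; fixed-width bins or SNR thresholds from the channel model are equally valid partitions, and only the `edges` line changes.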
-1
votes
1 answer

How to calculate the transition probability matrix of a second order Markov Chain

I have data in this form: Broker.Position IP BP SP IP IP .. I would like to calculate the second-order transition matrix in this form: BP IP SP BPBP SPSP IPIP BPSP SPBP IPSP SPIP BPIP IPBP
Rup Mitra
  • 41
  • 1
  • 2
  • 4
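A sketch of a second-order estimate: the row index is the ordered pair (X[t-1], X[t]) and the column is X[t+1]; the five-symbol sequence below is just the visible part of the question's data:

```python
from collections import Counter, defaultdict

def second_order_matrix(seq):
    """Rows are ordered pairs (X[t-1], X[t]); columns are the next
    symbol X[t+1]; counts are normalized per row."""
    counts = defaultdict(Counter)
    for a, b, c in zip(seq, seq[1:], seq[2:]):
        counts[(a, b)][c] += 1
    return {pair: {c: n / sum(ctr.values()) for c, n in ctr.items()}
            for pair, ctr in counts.items()}

seq = ["IP", "BP", "SP", "IP", "IP"]  # the visible part of the data
m = second_order_matrix(seq)
print(m[("IP", "BP")])  # {'SP': 1.0}
```

This is the usual trick of reducing a second-order chain to a first-order one over the alphabet of symbol pairs, which is exactly the BPBP/SPSP/… row labelling in the question.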
-1
votes
2 answers

index exceeding matrix dimensions

I was trying to get this low-order recursive function working in MATLAB. I want to calculate the probability of the status of a site at the next time step, given the initial probability of that status. P = probability x = status (0,1) Dij=…
happyme
  • 233
  • 2
  • 5
  • 16
-2
votes
2 answers

Markov chains in R

I'm using the markovchain package in R and the function mc<-markovchainFit(data). I have a probability matrix mc$estimate and I want to round the probabilities. How do I do that? Another question: how can I write that matrix to a text file or Excel? I…
-2
votes
1 answer

Gibbs Sampling Code

Has anyone here implemented Gibbs sampling with some test data? I have to implement Gibbs sampling, but I have problems nailing it down to the implementation level. ----How and from where do I choose test data? ----How do I create a Bayesian…
Madu
  • 4,849
  • 9
  • 44
  • 78
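On the test-data question, a common self-test is a target distribution with known conditionals, e.g. a standard bivariate normal with correlation rho, so the sampler's output can be checked against known moments. A minimal sketch:

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Gibbs sampler for a standard bivariate normal with correlation
    rho: each full conditional is N(rho * other, 1 - rho^2)."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # draw x | y
        y = rng.gauss(rho * x, sd)   # draw y | x
        if i >= burn_in:             # discard warm-up draws
            samples.append((x, y))
    return samples

samples = gibbs_bivariate_normal(0.8, 20_000)
mean_x = sum(x for x, _ in samples) / len(samples)
corr = sum(x * y for x, y in samples) / len(samples)
print(mean_x, corr)  # should be close to 0.0 and 0.8
```

Since the true means (0) and correlation (rho) are known in closed form, this doubles as the kind of correctness test the question asks about.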