Questions tagged [markov]

The Markov property refers to the memoryless property of a stochastic process.

Overview

From Wikipedia,

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Yt) can be expressed as follows:

P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, …, Y_0 = y_0) = P(Y_{t+1} = y | Y_t = y_t)
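For illustration, here is a minimal Python sketch of such a process: a two-state chain whose next state is sampled from the current state alone. The states and transition probabilities are invented for the example.

```python
import random

# Invented two-state chain: from each state, where can we go,
# and with what probability?
transitions = {
    "A": [("A", 0.9), ("B", 0.1)],
    "B": [("A", 0.5), ("B", 0.5)],
}

def step(state):
    # The next state depends only on `state`, not on how we got here.
    targets, weights = zip(*transitions[state])
    return random.choices(targets, weights=weights)[0]

state, path = "A", ["A"]
for _ in range(10):
    state = step(state)
    path.append(state)
print(path)
```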

Tag usage

Please consider asking questions about statistics and data analysis on Cross Validated, the statistics Stack Exchange site.

255 questions
7 votes • 3 answers

Markov decision process questions

(image: http://img693.imageshack.us/img693/724/markov.png) I'm a bit confused about some points here: What does it mean to say that an action will be successful 70% of the time he tries it? Does it mean that every time he tries to perform an…
devoured elysium • 101,373
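In grid-world style MDP examples, "successful 70% of the time" usually means the transition function is stochastic: the action reaches its intended target state with probability 0.7 and slips somewhere else otherwise. A minimal sketch of that reading; the state names and the uniform slip behaviour are assumptions for illustration:

```python
import random

def try_action(intended, alternatives, p_success=0.7):
    """Stochastic MDP transition: reach the intended state with
    probability p_success, otherwise slip to a random alternative."""
    if random.random() < p_success:
        return intended
    return random.choice(alternatives)

# Hypothetical example: an action aims at "s1" but can slip to "s0" or "s2".
outcomes = [try_action("s1", ["s0", "s2"]) for _ in range(10_000)]
print(outcomes.count("s1") / len(outcomes))  # ~0.7
```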
6 votes • 1 answer

Building a more realistic random word generator?

I've seen many examples of using Markov chains for generating random words based on source data, but they often seem a bit overly mechanical and abstract to me. I'm trying to develop a better one. I believe part of the problem is that they rely…
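One common way to make the output feel less mechanical is to raise the order of the chain, so each letter is conditioned on a longer context. A character-level sketch of that idea; the seed word list and the order are placeholders:

```python
import random
from collections import defaultdict

def build_model(words, order=2):
    """Record which letter follows each `order`-letter context."""
    model = defaultdict(list)
    for w in words:
        w = "^" * order + w + "$"              # start/end markers
        for i in range(len(w) - order):
            model[w[i:i + order]].append(w[i + order])
    return model

def generate(model, order=2, max_len=12):
    context, out = "^" * order, ""
    while len(out) < max_len:
        nxt = random.choice(model[context])
        if nxt == "$":                         # end-of-word marker
            break
        out += nxt
        context = context[1:] + nxt
    return out

# Tiny invented word list; a real generator would train on a large corpus.
model = build_model(["marker", "market", "martian", "marvel", "margin"])
print(generate(model))
```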
5 votes • 1 answer

Channel Attribution (Markov Chain Model) in Python

How can I do channel attribution (Markov chain model) in Python, like the 'ChannelAttribution' package in R?
Shankar Kanap • 51
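I can't vouch for a drop-in Python port of ChannelAttribution, but the first step of the Markov approach, estimating a first-order transition matrix between channels from observed paths, is short in plain Python. The journeys below are invented:

```python
from collections import Counter, defaultdict

# Invented customer journeys, each ending in conversion or not.
paths = [
    ["(start)", "search", "display", "(conversion)"],
    ["(start)", "search", "(null)"],
    ["(start)", "display", "search", "(conversion)"],
]

counts = defaultdict(Counter)
for path in paths:
    for a, b in zip(path, path[1:]):
        counts[a][b] += 1

# Row-normalise the counts into transition probabilities.
transition = {
    a: {b: n / sum(nbrs.values()) for b, n in nbrs.items()}
    for a, nbrs in counts.items()
}
print(transition["search"])
```

The R package goes on to compute each channel's removal effect from this matrix; that part is omitted here.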
5 votes • 2 answers

Markov Chains and decimal points in R?

I have plotted a Markov chain from a matrix in R. However, I have numerous probabilities under 0.01, and thus my probability plot looks something like this: I've been searching for hours and I can't seem to find something that allows me to display…
Megan • 61
5 votes • 1 answer

Understanding The Value Iteration Algorithm of Markov Decision Processes

In learning about MDPs I am having trouble with value iteration. Conceptually this example is very simple and makes sense: if you have a 6-sided die, and you roll a 4 or a 5 or a 6 you keep that amount in $, but if you roll a 1 or a 2 or a 3 you…
Sam Hammamy • 10,819
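A sketch of value iteration for the dice game as I read it: rolling a 4, 5 or 6 adds that amount to your bank and you may roll again; rolling a 1, 2 or 3 ends the game with nothing. The state is the current bank, and the cap is an assumption to keep the state space finite (stopping is clearly optimal long before it):

```python
CAP = 60  # assumed cap on the bank; stopping dominates well below this

def value_iteration(eps=1e-9):
    V = {b: float(b) for b in range(CAP + 1)}   # initialise with "stop now"
    while True:
        delta = 0.0
        for b in range(CAP + 1):
            # Rolling: 1-3 pay nothing, 4-6 move the bank up by that amount.
            roll = sum(V[min(b + k, CAP)] for k in (4, 5, 6)) / 6.0
            new = max(b, roll)                  # better of stopping / rolling
            delta = max(delta, abs(new - V[b]))
            V[b] = new
        if delta < eps:
            return V

V = value_iteration()
for b in range(8):
    roll = sum(V[min(b + k, CAP)] for k in (4, 5, 6)) / 6.0
    print(b, "roll" if roll > b else "stop")    # roll while the bank is small
```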
5 votes • 2 answers

How do Markov Chains work and what is memorylessness?

How do Markov chains work? I have read the Wikipedia article on Markov chains, but the thing I don't get is memorylessness. Memorylessness states that the next state depends only on the current state and not on the sequence of events that preceded it. If…
unknown • 4,859
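Memorylessness can be checked empirically: condition on the same current state but different histories, and the next-state frequencies should agree. A small simulation sketch with an invented two-state chain:

```python
import random
from collections import Counter

P = {"A": {"A": 0.2, "B": 0.8}, "B": {"A": 0.6, "B": 0.4}}

def step(s):
    return random.choices(list(P[s]), weights=list(P[s].values()))[0]

traj = ["A"]
for _ in range(200_000):
    traj.append(step(traj[-1]))

# Estimate P(next = "A" | current = "B") separately for each possible
# previous state; the Markov property says both estimates should match.
for prev in ("A", "B"):
    nxt = [traj[i + 1] for i in range(1, len(traj) - 1)
           if traj[i] == "B" and traj[i - 1] == prev]
    print(prev, Counter(nxt)["A"] / len(nxt))   # both ~0.6
```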
4 votes • 1 answer

How to implement Hidden Markov Model on multiple columns?

I'm having trouble implementing an HMM. I'm starting with a pandas dataframe where I want to use two columns to predict the hidden state. I'm using the hmmlearn package. I'm following the instructions for hmmlearn 'Working with multiple…
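With hmmlearn the usual pattern is to stack the two columns into one (n_samples, n_features) array, since fit takes a single observation matrix. A sketch under that assumption; the column names, random data and model settings are invented:

```python
import numpy as np
import pandas as pd
from hmmlearn import hmm

# Hypothetical dataframe with two observed columns.
df = pd.DataFrame({
    "returns": np.random.randn(500),
    "volume":  np.random.randn(500),
})

# hmmlearn expects shape (n_samples, n_features): two columns = two features.
X = df[["returns", "volume"]].to_numpy()

model = hmm.GaussianHMM(n_components=2, covariance_type="full", n_iter=100)
model.fit(X)
states = model.predict(X)   # one inferred hidden state per row
print(states[:20])
```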
4 votes • 2 answers

Steady State Probabilities (Markov Chain) Python Implementation

Hi, I am trying to generate steady-state probabilities for a transition probability matrix. Here is the code I am using: import numpy as np one_step_transition = np.array([[0.125 , 0.42857143, 0.75 ], [0.75 …
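One standard way to get the steady state is as the left eigenvector of the transition matrix for eigenvalue 1. A numpy sketch; since the matrix in the question is truncated, the one below is an invented row-stochastic example:

```python
import numpy as np

# Invented row-stochastic transition matrix (each row sums to 1).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.4, 0.4],
])

# The stationary distribution pi satisfies pi @ P = pi, i.e. pi is the
# left eigenvector of P for eigenvalue 1 (a right eigenvector of P.T).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print(pi)            # stationary probabilities
print(pi @ P - pi)   # ~0, sanity check
```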
4 votes • 1 answer

R markovchain package - fitting the Markov chain based on the state sequence matrix

I was trying to use the R markovchain package. I have a question regarding the markovchainFit function and the sequence matrix. By default the markovchainFit function is run with the sequence of states as the parameter. Then it is said in the…
Makavelli • 41
4 votes • 1 answer

Markov chains: random text based on probability (Java)

I'm trying to generate a string of 140 characters based on probabilities of repetition from an input text. I already have an array with each valid character and in a different array probability of each char. char[] array = [a, b, c, ...] double[]…
Alan • 361
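In Python (used here to match the other sketches rather than the question's Java) the weighted pick collapses to random.choices; the characters and probabilities below are invented stand-ins for the truncated arrays:

```python
import random

chars = ["a", "b", "c", " "]
probs = [0.4, 0.3, 0.2, 0.1]   # invented; must sum to 1

# Draw 140 characters, each independently with its own probability.
text = "".join(random.choices(chars, weights=probs, k=140))
print(text)
```

For a genuinely Markov version, keep one weight row per previous character and sample each new character from the row of the character before it.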
4 votes • 1 answer

PyMC: Parameter estimation in a Markov system

A Simple Markov Chain: Let's say we want to estimate parameters of a system such that we can predict the state of the system at time step t+1 given the state at time step t. PyMC should be able to deal with this easily. Let our toy system consist of a…
Stefan • 1,246
4 votes • 1 answer

Cleaning up this PHP Markov Chain output?

This is my first time working with Markov chains. I want to combine two sources of text and get a readable Markov Chain. The implementation I'm using is here - the sources of text are stripped of markup, etc. I was first exposed to Markov Chains…
andyhky • 1,798
3 votes • 1 answer

Replacing for loop across a list of same-dimensional matrices for more efficiency

I have a list of matrices of identical dimension. Across all matrices most values are zero. The non-zero values vary according to the position of the matrix in the list, and the values for replacement are stored in vectors of the same length as…
Thomas • 31
3 votes • 2 answers

Is there an R package to calculate a 1st-order transition matrix from a frequency table?

I have a frequency table aggregated from 800 million records and am wondering if I can use a package to calculate a 1st-order transition matrix from the frequency table, which is not symmetric because some states just never happened again. A sample…
smz • 263
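Whatever package turns up, the computation itself is row normalisation of the from/to count table; rows whose state never recurs sum to zero and need special handling. A pandas sketch with an invented table:

```python
import pandas as pd

# Invented from-state x to-state frequency table. It is not symmetric,
# and state "C" never transitions out, so its row is all zeros.
freq = pd.DataFrame(
    [[10, 5, 0],
     [ 2, 8, 4],
     [ 0, 0, 0]],
    index=["A", "B", "C"], columns=["A", "B", "C"],
)

# Divide each row by its total. All-zero rows become NaN; leave them,
# or treat the state as absorbing by putting 1 on the diagonal.
trans = freq.div(freq.sum(axis=1), axis=0)
print(trans)
```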
3 votes • 1 answer

Can finite state machines with conditional transitions be expressed as Markov chains?

I'd be curious to know whether finite state machines that have conditional transitions can be expressed as Markov chains. If they can't, what would be a good counterexample?
Szemeredi_31 • 131