Questions tagged [markov]

The Markov property refers to the memoryless property of a stochastic process.

Overview

From Wikipedia:

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Yt) satisfies

    P(Y(t+1) = y | Y(t), Y(t-1), ..., Y(0)) = P(Y(t+1) = y | Y(t))
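For illustration, here is a minimal Python sketch of a two-state Markov chain; the transition matrix is an invented example, not part of the tag wiki:

```python
import random

# Minimal sketch: simulate a two-state Markov chain with states {0, 1}.
# The transition matrix below is an illustrative assumption.
P = [[0.9, 0.1],   # P(next state | current state = 0)
     [0.5, 0.5]]   # P(next state | current state = 1)

def simulate(n_steps, start=0, seed=42):
    random.seed(seed)
    state, path = start, [start]
    for _ in range(n_steps):
        # The next state depends only on the current state -- the Markov property.
        state = 0 if random.random() < P[state][0] else 1
        path.append(state)
    return path

path = simulate(10)
```

The key point is in the loop: no history beyond `state` is consulted when drawing the next state.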

Tag usage

Please consider asking questions about statistics and data analysis on Stack Exchange's Cross Validated site.

255 questions
2
votes
1 answer

R Simulation Of Markov Chain

data1=data.frame("group"=c(1,2,3,4,5), "t11"=c(0.01,0.32,0.25,0.37,0.11), "t12"=c(0.48,0.45,0.61,0.29,0.23), "t13"=c(0.51,0.23,0.14,0.3,0.67), "t22"=c(0.13,0.91,0.41,0.69,0.42), "t23"=c(0.87,0.09,0.59,0.31,0.58)) set.seed(1) …
bvowe
  • 3,004
  • 3
  • 16
  • 33
2
votes
0 answers

Q-Learning policy doesn't agree with Value/Policy Iteration

I am playing with pymdptoolbox. It has a built-in problem of forest management. It can generate a transition matrix P and R by specifying a state value for forest function (default value is 3). The implementation of Q-Learning, PolicyIteration and…
Chenyang
  • 161
  • 1
  • 11
2
votes
0 answers

Is there an elegant and efficient way to implement weighted random choices in golang? Details on current implementation and issues inside

tl;dr: I'm looking for methods to implement a weighted random choice based on the relative magnitude of values (or functions of values) in an array in golang. Are there standard algorithms or recommendable packages for this? If so how do they…
kapaw
  • 265
  • 1
  • 2
  • 11
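One standard approach to the question above is to binary-search a prefix-sum array of the weights. This sketch is in Python rather than Go, but the same cumulative-sum idea ports directly to Go using `sort.SearchFloat64s`:

```python
import bisect
import itertools
import random

# Weighted random choice via cumulative sums and binary search: O(n) setup,
# O(log n) per draw.
def weighted_choice(items, weights, rng=random):
    cum = list(itertools.accumulate(weights))   # prefix sums of the weights
    r = rng.random() * cum[-1]                  # uniform in [0, total_weight)
    return items[bisect.bisect_right(cum, r)]   # first prefix sum exceeding r

random.seed(0)
counts = {"a": 0, "b": 0}
for _ in range(10_000):
    counts[weighted_choice(["a", "b"], [3, 1])] += 1
# counts["a"] lands near 7500, i.e. weight 3 out of a total of 4
```

In Python 3.6+ the standard library already offers `random.choices(items, weights=...)`, which uses the same technique internally.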
2
votes
1 answer

Markov Transition Probability Matrix Implementation in Python

I am trying to calculate one-step, two-step transition probability matrices for a sequence as shown below: sample = [1,1,2,2,1,3,2,1,2,3,1,2,3,1,2,3,1,2,1,2] import numpy as np def onestep_transition_matrix(transitions): n = 3 #number of…
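A count-and-normalize sketch for the question above, assuming states are labeled 1..n as in the sample:

```python
import numpy as np

# Estimate a one-step transition matrix from an observed state sequence by
# counting consecutive transitions and normalizing each row.
def onestep_transition_matrix(transitions, n_states=3):
    counts = np.zeros((n_states, n_states))
    for a, b in zip(transitions, transitions[1:]):
        counts[a - 1][b - 1] += 1              # states are labeled 1..n
    row_sums = counts.sum(axis=1, keepdims=True)
    return counts / np.where(row_sums == 0, 1, row_sums)  # avoid 0/0 rows

sample = [1,1,2,2,1,3,2,1,2,3,1,2,3,1,2,3,1,2,1,2]
M = onestep_transition_matrix(sample)
```

Under the Markov assumption the two-step matrix is simply the matrix product `M @ M`; alternatively it can be estimated directly by counting pairs two positions apart.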
2
votes
2 answers

Python: Running this code from terminal

I have this code which is meant to generate text via Markov chains/processes. It compiles fine with no errors and runs on terminal with no errors but doesn't generate any response/return? I do this by going into the directory where the Markov.py…
MD9
  • 105
  • 2
  • 14
2
votes
0 answers

R 'msm' Package for Markov model in discrete time

I'm trying to build a model on loan delinquency status (having 6 states) and transitions over a time period of one month. I want to use a Markov model to predict transition probabilities and also want to add covariates to the probabilities. But I'm…
ANP
  • 51
  • 1
  • 9
2
votes
1 answer

How to find markov blanket for a node?

I want to do feature selection using the Markov blanket algorithm. I am wondering whether there is any API in Java/Weka or in Python to find the Markov blanket. Consider I have a dataset. The dataset has a number of variables and one target variable. I want…
Rashida Hasan
  • 149
  • 3
  • 13
2
votes
0 answers

How can we test the stationarity and homogeneity of Markov chain using likelihood ratio or chi-square?

I found the R functions below in the Markov_chain package and applied the code to my dataset to investigate the stationarity, order and homogeneity of the Markov chain using a chi-square test, but when I execute the code I got a warning that Chi-squared…
yousif
  • 23
  • 1
  • 4
2
votes
1 answer

(Python) Markov, Chebyshev, Chernoff upper bound functions

I'm stuck on one task on my learning path. For the binomial distribution X∼B(p,n) with mean μ=np and variance σ²=np(1−p), we would like to upper bound the probability P(X≥c⋅μ) for c≥1. Three bounds are introduced: Formulas The task is to write…
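The three bounds in their standard textbook forms can be coded directly. The exact formulas asked for in the question were in an image, so the versions below are the usual ones (Markov, one-sided Chebyshev via the variance, and the multiplicative Chernoff bound with c = 1 + δ):

```python
import math

# Upper bounds on P(X >= c*mu) for X ~ Binomial(n, p), mu = n*p.

def markov_bound(c):
    # P(X >= c*mu) <= E[X] / (c*mu) = 1/c
    return 1.0 / c

def chebyshev_bound(n, p, c):
    # P(X >= c*mu) <= P(|X - mu| >= (c-1)*mu) <= var / ((c-1)*mu)^2, c > 1
    mu, var = n * p, n * p * (1 - p)
    return var / ((c - 1) * mu) ** 2

def chernoff_bound(n, p, c):
    # Multiplicative Chernoff: P(X >= c*mu) <= exp(mu * (c - 1 - c*ln c))
    mu = n * p
    return math.exp(mu * (c - 1 - c * math.log(c)))
```

For n = 100, p = 0.2, c = 1.5 these give roughly 0.667, 0.16 and 0.11 respectively, illustrating how much tighter the Chernoff bound is.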
2
votes
1 answer

How to calculate the probability mass function of a random variable modulo N, where N is a prime number?

I'm trying to solve the following math problem: A knight in standard international chess is sitting on a board as follows 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 The knight starts on square "0" and makes jumps to other squares according to…
Kurt Peek
  • 52,165
  • 91
  • 301
  • 526
2
votes
1 answer

Are Functional Programming & Markov Chains related somehow?

David Silver describes a property of Markov Chains as: The future is independent of the past given the present https://www.youtube.com/watch?v=lfHX2hHRMVQ (4 mins into video) This struck a chord with me because I am currently learning about…
2
votes
0 answers

Using a Markov Model to analyze a text input in Java

I'm very new to Java and I'm required to use a Markov Model to analyze a text (String) input. I will be honest: this is for an assignment. I am looking to learn how to answer it, not just copy and paste code. The code I am working with (this is…
K Lee
  • 21
  • 1
2
votes
3 answers

Basic Hidden Markov Model, Viterbi algorithm

I am fairly new to Hidden Markov Models and I am trying to wrap my head around a pretty basic part of the theory. I would like to use a HMM as a classifier, so, given a time series of data I have two classes: background and signal. How are the…
dan burke
  • 53
  • 7
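A minimal Viterbi sketch in NumPy for the two-class setup described above (background/signal). The matrices are invented for illustration; in practice they would come from training, e.g. Baum-Welch:

```python
import numpy as np

# Viterbi decoding: most likely hidden-state path given an observation sequence.
def viterbi(obs, start_p, trans_p, emit_p):
    # log-probabilities avoid underflow on long sequences
    V = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = V[:, None] + np.log(trans_p)   # scores[i, j]: come from i, go to j
        back.append(scores.argmax(axis=0))      # best predecessor for each state j
        V = scores.max(axis=0) + np.log(emit_p[:, o])
    # backtrack from the best final state
    path = [int(V.argmax())]
    for ptr in reversed(back):
        path.append(int(ptr[path[-1]]))
    return path[::-1]

start_p = np.array([0.8, 0.2])                  # state 0 = background, 1 = signal
trans_p = np.array([[0.9, 0.1], [0.2, 0.8]])    # trans_p[i, j] = P(j | i)
emit_p  = np.array([[0.7, 0.3], [0.2, 0.8]])    # emit_p[state, symbol]
path = viterbi([0, 0, 1, 1, 1], start_p, trans_p, emit_p)
```

With these numbers the decoded path switches from background to signal once the observations do, which is the classifier behaviour the question is after.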
2
votes
1 answer

How can I obtain stationary distribution of a Markov Chain given a transition probability matrix

I'm trying to write mpow(P, 18) in vector form & matrix form. Can anyone help me with that? Also, I'm trying to find the stationary distribution of each state. Pi_0 = ? Pi_1 = ? Pi_2 = ? ... Pi_5 = ? Here is the code I've written: P <- matrix(c(0,…
PeterNiklas
  • 75
  • 1
  • 9
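Two standard ways to compute a stationary distribution, illustrated on an invented 2-state chain rather than the asker's 6-state matrix:

```python
import numpy as np

# Row-stochastic transition matrix (rows sum to 1); illustrative example.
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])

# 1) Matrix power: for a regular chain, every row of P^k converges to pi.
pi_power = np.linalg.matrix_power(P, 50)[0]

# 2) Left eigenvector: solve pi P = pi, i.e. the eigenvector of P^T for
#    eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi_eig = np.real(vecs[:, np.argmax(np.real(vals))])
pi_eig /= pi_eig.sum()
```

For this P both methods give pi = (2/7, 5/7). The eigenvector route is exact and preferable for larger chains, since the power method only converges when the chain is regular.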
2
votes
0 answers

Simulate an artificial state-change sequence from a fitted semi-Markov model in R

I have a sequence of behavioural states (for a single moving animal), each with an associated duration, and am interested in producing a synthetic state sequence that preserves the properties of the original (particularly, the state-change…
Tom
  • 151
  • 9
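A semi-Markov simulation separates the state-change process from the dwell times: draw the next state from a transition matrix (with no self-transitions) and a duration from a state-specific distribution. This sketch uses an invented behaviour chain and exponential durations as placeholders for whatever the fitted model provides:

```python
import random

# Simulate a semi-Markov state sequence as (state, duration) pairs.
def simulate_semi_markov(trans, duration_samplers, start, n_changes, seed=7):
    random.seed(seed)
    seq, state = [], start
    for _ in range(n_changes):
        seq.append((state, duration_samplers[state]()))   # dwell time in state
        states, probs = zip(*trans[state].items())
        state = random.choices(states, weights=probs)[0]  # next distinct state
    return seq

# Hypothetical behaviour states; no self-transitions, so durations carry
# all the dwell-time information.
trans = {"rest": {"move": 0.7, "feed": 0.3},
         "move": {"rest": 0.5, "feed": 0.5},
         "feed": {"rest": 0.6, "move": 0.4}}
samplers = {s: (lambda: random.expovariate(1.0)) for s in trans}
history = simulate_semi_markov(trans, samplers, "rest", 20)
```

To preserve the properties of a fitted model, `trans` and `duration_samplers` would be replaced by the estimated transition matrix and the fitted per-state duration distributions.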