Questions tagged [markov]

Markov, or the Markov property, refers to the memoryless property of a stochastic process.

Overview

From Wikipedia,

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Y_t) can be expressed as:

P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, ..., Y_0 = y_0) = P(Y_{t+1} = y | Y_t = y_t)
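For example, a minimal R sketch of such a process (the two states and every probability below are made up for illustration); note that each simulated step looks only at the current state:

    # Two-state Markov chain: rows of P are the current state, columns the
    # next state, and each row sums to 1. All numbers are hypothetical.
    P <- matrix(c(0.9, 0.1,
                  0.4, 0.6),
                nrow = 2, byrow = TRUE,
                dimnames = list(c("sunny", "rainy"), c("sunny", "rainy")))

    simulate_chain <- function(P, n, start) {
      states <- rownames(P)
      path <- character(n)
      path[1] <- start
      for (t in 2:n) {
        # The next state depends only on the current state (Markov property).
        path[t] <- sample(states, 1, prob = P[path[t - 1], ])
      }
      path
    }

    set.seed(1)
    simulate_chain(P, 10, "sunny")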

Tag usage

Please consider the Cross Validated Stack Exchange site for questions concerning statistics and data analysis.

255 questions
1 vote · 2 answers

Transition matrix of an absorbing higher-order Markov chain

I have an absorbing Markov chain; let's say I have the states s = {START, S1, S2, END1, END2}. The state START will always be the starting point of the chain; however, it will not be possible to return to this state once you leave it. I am curious how…
Developer
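A hedged sketch of how such a transition matrix might look in R, using the states named in the question. Every probability is made up; the END states are made absorbing by giving them self-loops of 1, and START cannot be re-entered because no row puts mass on it:

    states <- c("START", "S1", "S2", "END1", "END2")
    P <- matrix(0, 5, 5, dimnames = list(states, states))
    P["START", c("S1", "S2")]      <- c(0.6, 0.4)        # START is left once
    P["S1", c("S1", "S2", "END1")] <- c(0.2, 0.5, 0.3)
    P["S2", c("S1", "S2", "END2")] <- c(0.3, 0.3, 0.4)
    P["END1", "END1"] <- 1   # absorbing
    P["END2", "END2"] <- 1   # absorbing
    rowSums(P)               # every row must sum to 1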
1 vote · 2 answers

Building a Markov chain in R

I have text in a column and I would like to build a Markov chain. I was wondering if there is a way to build a Markov chain for states A, B, C, D and generate a Markov chain with those states. Any thoughts? A <- c('A-B-C-D', 'A-B-C-A', 'A-B-A-B')
user3570187
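One way to do this in base R, starting from the vector in the question: split each string into states, collect the (from, to) pairs, and row-normalise the counts. The markovchain package's markovchainFit function should give a similar estimate.

    A <- c('A-B-C-D', 'A-B-C-A', 'A-B-A-B')
    pairs <- lapply(strsplit(A, "-"), function(s)
      cbind(from = head(s, -1), to = tail(s, -1)))  # consecutive state pairs
    pairs <- do.call(rbind, pairs)
    counts <- table(pairs[, "from"], pairs[, "to"])
    P <- prop.table(counts, margin = 1)  # row-normalise counts to probabilities
    P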
1 vote · 1 answer

Use the markovchain package to compare two empirically estimated Markov chains

I need to compare two probability matrices to know the degree of proximity of the chains, so I would use the resulting p-value of the test. I tried to use the markovchain R package, more specifically the divergenceTest function. But the problem is…
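Setting the markovchain API aside, one common alternative is a chi-squared test of homogeneity on the raw transition counts, row by row. A base-R sketch, where n1 and n2 are hypothetical transition count matrices with identical row/column order:

    # Test, state by state, whether transitions out of each state come from
    # the same distribution in both chains. Counts are made up.
    n1 <- matrix(c(30, 10, 5, 55), 2, 2, byrow = TRUE)
    n2 <- matrix(c(25, 15, 12, 48), 2, 2, byrow = TRUE)

    p_values <- sapply(seq_len(nrow(n1)), function(i) {
      # 2 x k table: counts of transitions out of state i in each chain
      chisq.test(rbind(n1[i, ], n2[i, ]))$p.value
    })
    p_values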
1 vote · 0 answers

How to choose a reward function for an optimization in Reinforcement Learning?

I am working on a sequential decision-making process, where a battery controller, given the renewable energy for a state, should follow an optimal policy that minimizes a global objective (minimizing the cost of power purchased from the grid)…
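As a hedged illustration of one possible choice: a reward equal to the negative cost of energy purchased from the grid, so that maximising reward minimises purchase cost. Every name and number below is hypothetical:

    grid_price <- 0.25  # currency per kWh, assumed constant here

    reward <- function(demand_kwh, renewable_kwh, battery_discharge_kwh) {
      # Energy bought from the grid is whatever demand is not covered.
      purchased <- max(demand_kwh - renewable_kwh - battery_discharge_kwh, 0)
      -grid_price * purchased
    }

    reward(demand_kwh = 10, renewable_kwh = 6, battery_discharge_kwh = 2)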
1 vote · 0 answers

Birth-death process: incomplete code

I have a birth-death process code problem. I have 4 states, S = {0, 1, 2, 3}. In state 0, there are no customers. In state 1, there is 1 customer being treated. In state 2, there is 1 customer being treated + 1 in the queue. In state 3, there is 1…
PeterNiklas
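A sketch of the generator matrix for such a chain, reading the four states as an M/M/1 queue with room for at most 3 customers; the rates lambda and mu are hypothetical:

    lambda <- 2  # arrival rate (births), assumed
    mu     <- 3  # service rate (deaths), assumed

    Q <- matrix(0, 4, 4, dimnames = list(0:3, 0:3))
    for (i in 1:3) Q[i, i + 1] <- lambda  # birth: state i-1 -> i (R is 1-based)
    for (i in 2:4) Q[i, i - 1] <- mu      # death: state i-1 -> i-2
    diag(Q) <- -rowSums(Q)                # rows of a generator sum to 0
    Q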
1 vote · 0 answers

How to fit an HMM with two hidden states to sequences in R

In R, I need to fit an HMM with two hidden states to a set of sequences. There are 2 classes, with sets of sequences for each. I need to find out which class a certain test sequence is from, and what the hidden state sequence is for it. Here are…
user1323104
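One possible approach, sketched with the HMM package (assumed to be installed): fit a two-state model per class with Baum-Welch, then decode a test sequence's hidden states with Viterbi; the forward probabilities of each fitted model can be used to score the test sequence for classification. The sequences and starting values below are made up:

    library(HMM)

    # Deliberately asymmetric starting values so EM can separate the states.
    hmm0 <- initHMM(States = c("S1", "S2"), Symbols = c("a", "b"),
                    transProbs    = matrix(c(0.7, 0.3, 0.3, 0.7), 2, byrow = TRUE),
                    emissionProbs = matrix(c(0.8, 0.2, 0.3, 0.7), 2, byrow = TRUE))

    train_class1 <- c("a", "a", "b", "a", "b", "b", "a", "a")
    fit1 <- baumWelch(hmm0, train_class1)$hmm  # EM-estimated model for class 1

    test_seq <- c("a", "b", "a", "a")
    viterbi(fit1, test_seq)  # most likely hidden state sequence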
1 vote · 2 answers

Markov decision process: same action leading to different states

Last week I read a paper suggesting MDPs as an alternative approach for recommender systems. The core of that paper was the representation of the recommendation process in terms of an MDP, i.e. states, actions, transition probabilities, a reward function and…
mangusta
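This is exactly what stochastic transitions model: the same action induces a distribution P(s' | s, a) over next states rather than a single successor. A tiny R sketch with made-up numbers:

    # After action "recommend item X" the user may accept or ignore it;
    # both probabilities are hypothetical.
    p_next <- c(accepted = 0.3, ignored = 0.7)
    set.seed(42)
    sample(names(p_next), size = 1, prob = p_next)  # one stochastic outcome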
1 vote · 1 answer

How can I implement Markov's algorithm with variables and markers?

I've been trying to implement Markov's algorithm, but I've only had partial success. The algorithm is fairly simple and can be found here. However, my project has an added difficulty: I have to use rules that include markers and variables. A…
Riccardo
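A minimal sketch of a plain Markov algorithm interpreter in R; markers and variables would be handled by expanding each variable rule into one concrete rule per substitution, which is omitted here:

    # Apply the first rule whose left-hand side occurs in the string to the
    # leftmost occurrence, restart from the top, and stop on a terminal rule
    # or when no rule matches.
    run_markov <- function(input, rules) {
      # rules: list of list(lhs =, rhs =, terminal =)
      repeat {
        applied <- FALSE
        for (r in rules) {
          if (grepl(r$lhs, input, fixed = TRUE)) {
            input <- sub(r$lhs, r$rhs, input, fixed = TRUE)  # leftmost match
            applied <- TRUE
            if (r$terminal) return(input)
            break  # restart scanning from the first rule
          }
        }
        if (!applied) return(input)
      }
    }

    # Example ruleset: squeeze out "a" before "b", then terminate.
    rules <- list(
      list(lhs = "ab", rhs = "b", terminal = FALSE),
      list(lhs = "b",  rhs = "b", terminal = TRUE)
    )
    run_markov("aabab", rules)  # "bb"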
1 vote · 0 answers

Maximum likelihood: estimating the number of maxima

I'm training a hidden Markov model using EM, and I want some estimate of how "certain" I can be about the learned parameters (i.e., the estimated transition, emission, and prior probabilities). In general, different initial conditions result…
David U
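A common heuristic sketch: rerun EM from several random initialisations and inspect how much the learned parameters vary; a wide spread suggests several local maxima. Illustrated with the HMM package and a made-up observation sequence:

    library(HMM)

    random_rows <- function(n) {          # random row-stochastic matrix
      m <- matrix(runif(n * n), n, n)
      m / rowSums(m)
    }

    obs <- c("a", "b", "a", "a", "b", "b", "a", "b", "a", "a")
    restarts <- lapply(1:10, function(seed) {
      set.seed(seed)
      hmm0 <- initHMM(States = c("S1", "S2"), Symbols = c("a", "b"),
                      transProbs = random_rows(2),
                      emissionProbs = random_rows(2))
      baumWelch(hmm0, obs)$hmm$transProbs
    })
    restarts[[1]]  # compare across restarts; large spread = several maxima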
1 vote · 2 answers

Markov chain: merging states in a transition matrix

I need to merge two states in a transition matrix. For example, I have the matrix below:

         A    B    C    D    E    F
    A  0.5  0.4  0    0    0.1  0
    B  0.5  0.1  0.2  0.1  0.1  0
    …
CVec
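A hedged sketch of one simple merging rule: transitions into the merged state are the sum of the two columns, while transitions out of it are a weighted mix of the two rows. Equal weights are assumed below; the states' stationary probabilities would be a more principled weighting. Demonstrated on a small made-up matrix:

    merge_states <- function(P, s1, s2, new = paste0(s1, s2), w = 0.5) {
      keep  <- setdiff(rownames(P), c(s1, s2))
      row_m <- w * P[s1, ] + (1 - w) * P[s2, ]  # out of the merged state
      col_m <- P[, s1] + P[, s2]                # into the merged state
      Q <- rbind(cbind(P[keep, keep, drop = FALSE], col_m[keep]),
                 c(row_m[keep], row_m[s1] + row_m[s2]))
      dimnames(Q) <- list(c(keep, new), c(keep, new))
      Q
    }

    P <- matrix(c(0.5, 0.4, 0.1,
                  0.2, 0.3, 0.5,
                  0.3, 0.3, 0.4), 3, 3, byrow = TRUE,
                dimnames = list(c("A", "B", "C"), c("A", "B", "C")))
    merge_states(P, "B", "C")  # rows of the result still sum to 1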
1 vote · 1 answer

Reinforcement learning with neural networks

I am working on a project with RL and NNs. I need to determine the structure of the action vector that will be fed to a neural network. I have 3 different actions (A, B, and Nothing), each with different power levels (e.g. A100, A50, B100, B50). I wonder what is the…
Betamoo
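One common choice, sketched here, is a one-hot vector with one entry per discrete (action, power) pair; the action names mirror the question's example:

    actions <- c("A100", "A50", "B100", "B50", "Nothing")

    encode_action <- function(a) as.numeric(actions == a)  # one-hot vector
    encode_action("A50")
    # [1] 0 1 0 0 0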
1 vote · 0 answers

Creating a probability matrix from a DocumentTermMatrix

I'm an economist and I'm now analysing some qualitative and text data, which is new for me. I want to create a Markov model for text prediction based on my interview corpora. I have analysed a corpus with the tm package, and after creating a…
JosePerles
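Once the counts are in an ordinary matrix, row-normalising them gives the transition probabilities. A base-R sketch with a made-up bigram count matrix standing in for the DocumentTermMatrix-derived counts:

    counts <- matrix(c(4, 1, 0,
                       2, 2, 2,
                       0, 3, 1), 3, 3, byrow = TRUE,
                     dimnames = list(c("the", "cat", "sat"),
                                     c("the", "cat", "sat")))
    P <- prop.table(counts, margin = 1)  # each row now sums to 1
    P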
1 vote · 2 answers

Partially Observable Markov Decision Process: optimal value function

I understand how belief states are updated in a POMDP. But in the Policy and Value function section of http://en.wikipedia.org/wiki/Partially_observable_Markov_decision_process, I could not figure out how to calculate the value of V*(T(b,a,o)) for finding…
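For reference, the optimal value function from that article satisfies the belief-state Bellman equation:

    V^*(b) = \max_{a}\Big[\, r(b,a) + \gamma \sum_{o} \Pr(o \mid b, a)\; V^*\!\big(\tau(b,a,o)\big) \Big],
    \qquad r(b,a) = \sum_{s} b(s)\, R(s,a)

Here τ(b,a,o) (written T(b,a,o) in the question) is the updated belief after taking action a and observing o, so V*(τ(b,a,o)) is the same value function evaluated recursively at the new belief; since the belief space is continuous, V* is in practice approximated, e.g. with point-based value iteration.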
1 vote · 1 answer

Count the probability in a cell array

Hi, I have a cell array; the second column is the count of each 'XX->XX' transition, for example:

    'AA->AA'  [21]  [4.2084]
    'AA->AC'  [15]  [3.0060]
    'AA->AG'  [ 9]  [1.8036]
    'AA->AT'  [12]  [2.4048]
    'AC->CA'  [14]  [2.8056]
    'AC->CC'  [16]  …
Jack2007
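Translating the cell array into plain vectors, the conditional probability of each transition is its count divided by the total count leaving the same source state. An R sketch using the counts quoted above:

    trans  <- c("AA->AA", "AA->AC", "AA->AG", "AA->AT", "AC->CA", "AC->CC")
    counts <- c(21, 15, 9, 12, 14, 16)

    from <- sub("->.*", "", trans)                 # source state, e.g. "AA"
    prob <- counts / ave(counts, from, FUN = sum)  # divide by per-source total
    data.frame(trans, counts, prob)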
1 vote · 1 answer

Algorithm to Generate Transition Matrix

The transition probability is given; e.g., for one product, when the current price is High, the probability of the next period being High is 0.3 and being Low is 0.7. My question is: for two independent products, what is the transition…
Titanic
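For two independent products, the joint chain's transition matrix is the Kronecker product of the individual matrices. A sketch in R; the question only gives the row for High, so the Low row is assumed for illustration:

    P1 <- matrix(c(0.3, 0.7,   # from High: given in the question
                   0.4, 0.6),  # from Low: assumed for illustration
                 2, 2, byrow = TRUE,
                 dimnames = list(c("High", "Low"), c("High", "Low")))
    P2 <- P1  # second, independent product; identical dynamics assumed

    P_joint <- kronecker(P1, P2, make.dimnames = TRUE)
    P_joint  # 4 x 4 over (price1, price2) pairs; rows sum to 1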