Questions tagged [markov-models]

Markov chain

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property states that the distribution of the next state depends only on the current state, not on the earlier history of the system. An example use of a Markov chain is Markov chain Monte Carlo (MCMC), which uses the Markov property to prove that a particular random-walk procedure converges to, and therefore samples from, a desired target distribution.
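As a quick illustration, here is a minimal sketch of simulating such a chain in Python with NumPy; the two-state transition matrix P is invented for the example, with row i giving the distribution of the next state when the current state is i:

    import numpy as np

    # Hypothetical two-state transition matrix: row i is the distribution
    # of the next state given that the current state is i.
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    rng = np.random.default_rng(0)
    state = 0
    path = [state]
    for _ in range(10):
        # Markov property: the next state depends only on the current state
        state = rng.choice(2, p=P[state])
        path.append(state)
    print(path)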

Hidden Markov model

A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to determine the state precisely. Several well-known algorithms for hidden Markov models exist. Given a sequence of observations, the Viterbi algorithm computes the most likely corresponding sequence of states, the forward algorithm computes the probability of the observation sequence, and the Baum–Welch algorithm estimates the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is speech recognition, where the observed data is the audio waveform and the hidden state is the spoken text; here the Viterbi algorithm finds the most likely sequence of spoken words given the audio.
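To make the Viterbi step concrete, here is a small log-space sketch in Python with NumPy; the parameter names (log_start, log_trans, log_emit) are made up for the example, and any real HMM library will differ:

    import numpy as np

    def viterbi(obs, log_start, log_trans, log_emit):
        """Most likely state sequence for an observation sequence `obs`.

        log_start[i]   : log P(first state = i)
        log_trans[i,j] : log P(next state = j | current state = i)
        log_emit[i,k]  : log P(observation = k | state = i)
        """
        T, n = len(obs), log_start.shape[0]
        delta = np.empty((T, n))           # best log-prob of a path ending in each state
        psi = np.zeros((T, n), dtype=int)  # back-pointers
        delta[0] = log_start + log_emit[:, obs[0]]
        for t in range(1, T):
            scores = delta[t - 1][:, None] + log_trans   # (from, to)
            psi[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + log_emit[:, obs[t]]
        states = [int(delta[-1].argmax())]
        for t in range(T - 1, 0, -1):      # backtrack along the pointers
            states.append(int(psi[t, states[-1]]))
        return states[::-1]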

Markov decision process

A Markov decision process is a Markov chain in which state transitions depend on both the current state and an action applied to the system. Typically, a Markov decision process is used to compute a policy, a mapping from states to actions, that maximizes expected cumulative reward. It is closely related to reinforcement learning, and can be solved with value iteration and related methods.
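As a sketch of value iteration on a toy MDP in Python with NumPy (the transition tensor and rewards below are invented for illustration):

    import numpy as np

    # Toy MDP: P[a, s, t] = P(next = t | state = s, action = a); R[s, a] = expected reward.
    P = np.array([[[0.8, 0.2], [0.1, 0.9]],
                  [[0.5, 0.5], [0.6, 0.4]]])
    R = np.array([[1.0, 0.0],
                  [0.0, 2.0]])
    gamma = 0.95

    V = np.zeros(2)
    for _ in range(10_000):
        # Bellman optimality backup: Q(s,a) = R(s,a) + gamma * sum_t P(t|s,a) V(t)
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < 1e-9:
            break
        V = V_new
    policy = Q.argmax(axis=1)  # greedy policy w.r.t. the converged values
    print(V, policy)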

Partially observable Markov decision process

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. Solving a POMDP exactly is computationally intractable in general, but approximation techniques have made POMDPs useful for a variety of applications, such as controlling simple agents or robots.

Markov random field

A Markov random field, or Markov network, may be considered a generalization of a Markov chain to multiple dimensions. In a Markov chain, the state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each variable depends on the neighboring variables to which it is connected. More specifically, the joint distribution over all the variables in the field factorizes, up to a normalizing constant, as the product of "clique potentials", one for each clique of the graph. Modeling a problem as a Markov random field is useful because it implies that each variable is conditionally independent of all the others given its neighbors, which is what makes efficient inference possible on many graph structures.
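A tiny worked example of that factorization in Python, for a three-variable chain x1 - x2 - x3 with two pairwise cliques (the potential tables are invented for illustration):

    import numpy as np
    from itertools import product

    phi_12 = np.array([[4.0, 1.0], [1.0, 4.0]])  # clique potential on (x1, x2)
    phi_23 = np.array([[3.0, 1.0], [1.0, 3.0]])  # clique potential on (x2, x3)

    def unnormalized(x1, x2, x3):
        # The joint factorizes as a product of clique potentials, up to a constant Z
        return phi_12[x1, x2] * phi_23[x2, x3]

    Z = sum(unnormalized(*x) for x in product([0, 1], repeat=3))
    joint = {x: unnormalized(*x) / Z for x in product([0, 1], repeat=3)}
    print(joint[(0, 0, 0)])  # probability of the all-zeros configuration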

Hierarchical Markov Models

Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as the task or activity the person is performing. Two kinds of hierarchical Markov models are the Hierarchical Hidden Markov Model and the Abstract Hidden Markov Model. Both have been used for behavior recognition, and certain conditional independence properties between levels of abstraction in the model allow for faster learning and inference.

100 questions
0 votes, 2 answers

Regime-switching multivariate GARCH

I have a regression with 4 independent variables and a dependent variable. I want to implement a regime-switching GARCH model but have been unable to find a package in R, Python, or Matlab. The MSGARCH package available in R is for univariate series…
pulsar
0 votes, 1 answer

TML (Tractable Markov Logic) is a wonderful model! Why haven't I seen it used in a wide range of artificial-intelligence application scenarios?

I have been reading papers about Markov models when a great extension, TML (Tractable Markov Logic), came out. It is a subset of Markov logic, and uses probabilistic class and part hierarchies to control complexity. This model has…
user10379342
0 votes, 1 answer

How to solve Markov transition rate matrix?

I have a 1x16 vector of unknowns x = (x1, x2, x3, …, x16) with the condition that x1 + x2 + x3 + … + x16 = 1. I also have a 16x16 matrix Q of real values. I need to solve the equation x*Q = x as shown here. How can I solve it in Matlab…
Abdullah1
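For what it's worth, one standard way to solve x*Q = x with sum(x) = 1 (treating Q as a transition probability matrix, as the question's equation implies) is to stack the normalization constraint onto the linear system. A NumPy sketch of the same linear algebra, with a 2x2 toy matrix standing in for the 16x16 one:

    import numpy as np

    def stationary(Q):
        """Solve x @ Q = x subject to sum(x) == 1 as one least-squares system."""
        n = Q.shape[0]
        A = np.vstack([(Q - np.eye(n)).T,  # rows encoding x (Q - I) = 0
                       np.ones(n)])        # plus the normalization sum(x) = 1
        b = np.zeros(n + 1)
        b[-1] = 1.0
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

    Q = np.array([[0.9, 0.1],
                  [0.3, 0.7]])
    print(stationary(Q))  # -> approximately [0.75, 0.25]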
0 votes, 1 answer

MSGARCH package in R

In a "rugarch" package garch specification looks like this: ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1), submodel = NULL, external.regressors = NULL, variance.targeting = FALSE), mean.model = list(armaOrder = c(1, 1),…
0 votes, 1 answer

Flink Markov Model Implementation

I want to implement a Markov model in Flink. First I read data from Kafka. How can I implement a trigram Markov model with Flink?
0 votes, 1 answer

Markov chain fit from many individual chains (in R)

Using the markovchain package, I'm working with a dataset comprising six monthly observations for each of 23k individuals. When I go to fit a DTMC using the markovchainFit function, the function appears to want to take in what would be just one…
Andrew Cheesman
0 votes, 1 answer

How to use hmmlearn to classify English text?

I want to implement a classic Markov model problem: train an MM to learn English text patterns, and use that to detect English text vs. random strings. I decided to use hmmlearn so I don't have to write my own. However, I am confused about how to train…
Superbest
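One possible framing, sketched in Python under the assumption that a recent hmmlearn is installed (where CategoricalHMM models sequences of discrete symbols); the character encoding, training text, and per-symbol scoring below are choices invented for illustration, not part of the library:

    import numpy as np
    from hmmlearn import hmm

    SYMBOLS = "abcdefghijklmnopqrstuvwxyz "

    def encode(text):
        # Map each character to an integer symbol; drop anything outside SYMBOLS
        return np.array([[SYMBOLS.index(c)] for c in text.lower() if c in SYMBOLS])

    train = encode("the quick brown fox jumps over the lazy dog " * 50)
    model = hmm.CategoricalHMM(n_components=4, n_iter=100, random_state=0)
    model.fit(train)

    # Higher average log-likelihood per symbol suggests more English-like text
    for s in ["this looks like english text", "xq zvkj qpwr zzzz"]:
        X = encode(s)
        print(s, "->", model.score(X) / len(X))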
0 votes, 1 answer

Viterbi algorithm in linear time

I have a problem where, given a hidden Markov model and its states S, I need to find an algorithm that returns the most probable path through the hidden Markov model for a given sequence X in time O(|S|). I was thinking of developing a graph where I…
0 votes, 1 answer

Compute n step-ahead state in markov model

I have a transition matrix over 40 states that gives the probability of moving from one state to another. How can we compute which state the system will be in after n time steps?
user6460588
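The usual answer: if x0 is the distribution over states at time 0 and P is the transition matrix, the distribution after n steps is x0 times the n-th matrix power of P. A NumPy sketch, with a 2x2 toy matrix standing in for the 40x40 one:

    import numpy as np

    P = np.array([[0.9, 0.1],
                  [0.3, 0.7]])           # toy stand-in for the 40x40 transition matrix
    x0 = np.array([1.0, 0.0])            # start in state 0 with certainty
    n = 5
    xn = x0 @ np.linalg.matrix_power(P, n)
    print(xn)                            # distribution over states after n steps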
0 votes, 1 answer

Continuous-time finite-horizon MDP

Is there any algorithm for solving a finite-horizon semi-Markov decision process? I want to find the optimal policy for a sequential decision problem with a finite action space, a finite state space, and a deadline. Critically, different actions…
0 votes, 1 answer

Expectation maximization vs. direct numerical optimization of the likelihood function for estimating a high-dimensional Markov-switching/HMM model

I am currently estimating a Markov-switching model with many parameters using direct optimization of the log-likelihood function (through the forward-backward algorithm). I do the numerical optimization using Matlab's genetic algorithm, since other…
0 votes, 1 answer

Proof that each row of the self-product of a transition matrix sums to 1

I am unable to prove that the sum of each row of the self-product of a transition matrix is 1… Let A be a transition probability matrix, which means that each row of A sums to 1, and let P = A*A. I want to prove that P is also a valid transition…
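The proof is a two-line computation: swap the order of the finite sums and use the fact that every row of A sums to 1. In LaTeX:

    \sum_j P_{ij} = \sum_j \sum_k A_{ik} A_{kj}
                  = \sum_k A_{ik} \Big( \sum_j A_{kj} \Big)
                  = \sum_k A_{ik} \cdot 1 = 1.

Since each entry of P is a sum of products of nonnegative numbers, P is also entrywise nonnegative, so it is a valid transition matrix.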
0 votes, 1 answer

multiple transitions of Markov model

I'm trying to run a data frame through multiple transitions of a Markov model for a class. The data frame looks like this: df = pd.DataFrame({'Bull Market': [.9, .8, .5], 'Bear Market': [.25, .05, .25], …
0 votes, 1 answer

Markov chain - Likelihood of sample with "unseen" observations (probability 0)

I have a large Markov chain and a sample for which I want to calculate the likelihood. The problem is that some observations or transitions in the sample don't occur in the Markov chain, which makes the total likelihood 0 (or the log-likelihood -…
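A common workaround is to smooth the transition counts before normalizing, e.g. add-alpha (Laplace) smoothing, so that unseen transitions receive a small nonzero probability; alpha is a free parameter here, and the counts are invented for illustration:

    import numpy as np

    def smoothed_transitions(counts, alpha=1.0):
        """Add-alpha (Laplace) smoothing of a matrix of transition counts."""
        counts = np.asarray(counts, dtype=float) + alpha
        return counts / counts.sum(axis=1, keepdims=True)

    counts = np.array([[90, 10,  0],
                       [ 0, 50, 50],
                       [20,  0, 80]])    # the zeros would make the likelihood 0
    P = smoothed_transitions(counts)
    path = [0, 2, 1, 2]                  # a sample containing "unseen" transitions
    log_lik = sum(np.log(P[a, b]) for a, b in zip(path[:-1], path[1:]))
    print(log_lik)                       # finite, thanks to smoothing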
0 votes, 0 answers

Fitting of a Markov Switching Model

I'm using the package fMarkovSwitching in R to do what I was trying to do here: Fitting Markov Switching Models to data in R. However, I get another weird error message. I'm trying to replicate the example on page 12 (using my time series of log…
Egodym