Questions tagged [markov-models]

Markov chain

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property means that the distribution of this variable depends only on the state at the previous time step, not on the system's earlier history. An example use of a Markov chain is Markov chain Monte Carlo (MCMC), which uses the Markov property to prove that a particular method for performing a random walk will sample from the joint distribution of the system.
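As a minimal sketch of the Markov property in Python: the next state below is drawn using only the current state, never the earlier history. The two-state "weather" chain and all probabilities are invented for illustration.

```python
import numpy as np

# Toy two-state chain: state 0 = "sunny", state 1 = "rainy".
# The transition probabilities are made up for illustration.
P = np.array([[0.9, 0.1],   # P(next state | current = sunny)
              [0.5, 0.5]])  # P(next state | current = rainy)

rng = np.random.default_rng(0)

def simulate(P, start, steps):
    """Walk the chain: each step depends only on the current state."""
    state = start
    path = [state]
    for _ in range(steps):
        state = rng.choice(len(P), p=P[state])
        path.append(state)
    return path

print(simulate(P, start=0, steps=10))
```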

Hidden Markov model

A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to determine the state precisely. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi algorithm will compute the most likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.
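As a concrete illustration, here is a minimal Viterbi decoder in Python; the two-state, three-symbol model and all probabilities are invented for the example:

```python
import numpy as np

# Toy HMM: 2 hidden states, 3 observation symbols (all numbers invented).
start = np.array([0.6, 0.4])        # P(initial hidden state)
trans = np.array([[0.7, 0.3],       # P(next state | current state)
                  [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1],   # P(observation | hidden state)
                 [0.1, 0.3, 0.6]])

def viterbi(obs):
    """Most likely hidden-state sequence for `obs`, in the log domain."""
    delta = np.log(start) + np.log(emit[:, obs[0]])
    backptr = []
    for o in obs[1:]:
        scores = delta[:, None] + np.log(trans)  # score of each transition
        backptr.append(scores.argmax(axis=0))    # best predecessor per state
        delta = scores.max(axis=0) + np.log(emit[:, o])
    state = int(delta.argmax())                  # best final state
    path = [state]
    for ptr in reversed(backptr):                # trace back to the start
        state = int(ptr[state])
        path.append(state)
    return path[::-1]

print(viterbi([0, 1, 2, 2]))
```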

Markov decision process

A Markov decision process is a Markov chain in which state transitions depend on the current state and an action applied to the system. Typically, a Markov decision process is used to compute a policy of actions that will maximize some utility with respect to expected rewards. It is closely related to reinforcement learning and can be solved with value iteration and related methods.
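For example, value iteration on a tiny Markov decision process can be sketched in a few lines of Python; the two-state, two-action model below is invented for illustration:

```python
import numpy as np

# Toy MDP (all numbers invented): P[a, s, s'] is the probability of
# moving s -> s' under action a; R[s, a] is the immediate reward.
P = np.array([[[0.8, 0.2],
               [0.1, 0.9]],
              [[0.5, 0.5],
               [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.9  # discount factor

V = np.zeros(2)
while True:
    # Bellman backup: Q[s, a] = R[s, a] + gamma * E[V(next state)]
    Q = R + gamma * np.einsum('asn,n->sa', P, V)
    V_new = Q.max(axis=1)          # act greedily in every state
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)          # greedy policy for the converged values
print(V, policy)
```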

Partially observable Markov decision process

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. Solving POMDPs exactly is computationally intractable (the finite-horizon problem is PSPACE-complete), but approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.
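Because the state is hidden, a POMDP agent plans over a belief, a probability distribution over states that is updated after every observation. A minimal Bayes-filter belief update in Python, with an invented two-state model and a single fixed action, might look like this:

```python
import numpy as np

# Invented two-state model for one fixed action.
trans = np.array([[0.9, 0.1],        # trans[s, s'] = P(s' | s, action)
                  [0.2, 0.8]])
obs_model = np.array([[0.75, 0.25],  # obs_model[s', o] = P(o | s')
                      [0.30, 0.70]])

def update_belief(belief, observation):
    """One Bayes filter step: predict through the dynamics, then reweight."""
    predicted = belief @ trans                   # P(s' | history, action)
    unnorm = predicted * obs_model[:, observation]
    return unnorm / unnorm.sum()                 # renormalize

b = np.array([0.5, 0.5])                         # uniform initial belief
for o in [0, 0, 1]:
    b = update_belief(b, o)
    print(b)
```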

Markov random field

A Markov random field, or Markov network, may be considered to be a generalization of a Markov chain to multiple dimensions. In a Markov chain, the state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables with which it is connected. More specifically, the joint distribution of the field factorizes as a normalized product of "clique potentials", one potential function for each clique in the graph. Modeling a problem as a Markov random field is useful because it implies that the joint distribution can be built in this manner from purely local functions.
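To make the factorization concrete, here is a tiny three-variable chain MRF in Python; the graph and the potential tables are invented for illustration:

```python
import numpy as np
from itertools import product

# Chain MRF x1 - x2 - x3 over binary variables; its cliques are the
# edges {x1, x2} and {x2, x3}. The potential tables are invented.
psi_12 = np.array([[2.0, 1.0],
                   [1.0, 2.0]])
psi_23 = np.array([[3.0, 1.0],
                   [1.0, 3.0]])

def score(x1, x2, x3):
    """Unnormalized joint: the product of the clique potentials."""
    return psi_12[x1, x2] * psi_23[x2, x3]

# The partition function Z turns the product into a distribution.
Z = sum(score(*x) for x in product([0, 1], repeat=3))
print(score(0, 0, 0) / Z)   # P(x1=0, x2=0, x3=0)
```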

Hierarchical Markov Models

Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as the task or activity the person is performing. Two kinds of hierarchical Markov models are the hierarchical hidden Markov model and the abstract hidden Markov model. Both have been used for behavior recognition, and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.

100 questions
3 votes, 1 answer

What HMM (Hidden Markov Model) compression libraries are available for .NET?

I am looking for a library that uses Markov models/hidden Markov models for data compression. I will need to use it from .NET. I googled for MM/HMM compressors but didn't find any helpful reference (I may just be a bad googler though). Any…
oleksii
3 votes, 1 answer

Input Output Hidden Markov Model Implementation in Python

I am trying to implement hidden Markov models with an input-output architecture, but I could not find any good Python implementation for it. Can anybody share a Python package that would support the following HMM implementation? Allow…
Rajat
3 votes, 0 answers

Building a time-inhomogeneous Markov chain in Python

Using the search function did not help me find a solution for my problem, which is why I created this post. First of all, I am fairly new to Python and therefore my knowledge is limited. I am analysing a data set, which is based on time use…
3 votes, 1 answer

R visualization of Markov chains | change values in transition matrix by hand

I ran a Markov model in R, primarily to get the Markov graph. I want to exclude all edges with a probability < 0.4 from the transition matrix (in this case the edge from start to c2 should be deleted). I tried this by setting these values to 0. But…
flozygy
3 votes, 2 answers

Applying hidden Markov model to multiple simultaneous bit sequences

This excellent article on implementing a hidden Markov model in C# does a fair job of classifying a single bit sequence based on training data. How can the algorithm be modified, or built out (multiple HMMs?), to support the classification of multiple…
Petrus Theron
3 votes, 2 answers

Reinforcement learning And POMDP

I am trying to use a multi-layer NN to implement the probability function in a partially observable Markov process. I thought the inputs to the NN would be: current state, selected action, result state; the output is a probability in [0,1] (prob. that…
2 votes, 0 answers

Transition probability with zero-frequency

With the code below I can calculate the Markov chain probabilities per time step. However, I would like to make the following change to my code: a) speed up the transitions: increase the count of each transition by one AND another change in my code…
Rstudent
2 votes, 2 answers

Why does my markov chain produce identical sentences from corpus?

I am using the markovify Markov chain generator in Python, and when using the example code given there it produces a lot of duplicate sentences for me and I don't know why. The code is as follows: import markovify # Get raw text as string. with…
2 votes, 1 answer

How to use depmixS4 for classification?

I'm trying to use the depmixS4 package in R to classify stock price movements (1 for up, 0 for down). The top few rows of my data are below: Date Open High Low Close Adj.Close Volume Movement 01/12 …
2 votes, 0 answers

Numpy: Raise diagonalizable square matrix to infinite power

Consider a Markovian process with a diagonalizable transition matrix A such that A=PDP^-1, where D is a diagonal matrix with eigenvalues of A, and P is a matrix whose columns are eigenvectors of A. To compute, for each state, the likelihood of…
Unis
2 votes, 1 answer

What type is biopython 1.78 MarkovModel.train_visible() training_data?

I want to train a second-order Markov model for a nucleotide sequence using biopython's Bio.MarkovModel.train_visible(). That is, alphabet=["A","T","G","C"], states=["AA","AT","TT"...] However, I get an error: 474 states_indexes =…
makenzin
2 votes, 1 answer

How to train a hidden markov model with constrained probabilities (or missing links between hidden states)?

I have a hidden Markov model (HMM) with 3 hidden states and 2 discrete emission symbols. I know that the probability of transitioning from state 2 to state 3 is 0 (i.e. there is no direct link from S2 to S3). What is the best way of fitting the…
2 votes, 0 answers

R generate random sample using higher order markov chain

Is there a way to generate a random sample from a higher-order Markov chain? I used the package clickstream to estimate a 2nd-order Markov chain and I'm now trying to generate a sample from it. I understand how to do this from a transition matrix…
chrisjacques
2 votes, 0 answers

Is there an elegant and efficient way to implement weighted random choices in golang? Details on current implementation and issues inside

tl;dr: I'm looking for methods to implement a weighted random choice based on the relative magnitude of values (or functions of values) in an array in golang. Are there standard algorithms or recommendable packages for this? If so, how do they…
kapaw
2 votes, 1 answer

Markov model diagram directly from data (markovchain or heemod package?)

I want to read a bunch of factor data and create a transition matrix from it that I can visualise nicely. I found a very sweet package called 'heemod' which, together with 'diagram', does a decent job. For my first quick-and-dirty approach, I ran a…
RalfB