Questions tagged [markov-models]

Markov chain

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property states that the distribution of this variable at each step depends only on the state at the previous step, not on the earlier history. An example use of a Markov chain is Markov chain Monte Carlo (MCMC), which uses the Markov property to show that a particular method for performing a random walk will sample from a target distribution.
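As a small illustration, the following Python sketch simulates a two-state Markov chain; the states and transition probabilities are invented for the example rather than taken from any particular system.

    import numpy as np

    # A minimal two-state Markov chain ("sunny"/"rainy"); the transition
    # probabilities below are invented for illustration.
    states = ["sunny", "rainy"]
    P = np.array([[0.9, 0.1],   # row = current state, columns = P(next state)
                  [0.5, 0.5]])

    rng = np.random.default_rng(0)
    state = 0                   # start in "sunny"
    trajectory = [states[state]]
    for _ in range(10):
        # Markov property: the next state depends only on the current state.
        state = rng.choice(2, p=P[state])
        trajectory.append(states[state])

    print(trajectory)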

Hidden Markov model

A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi algorithm will compute the most-likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is for speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.
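A minimal Python sketch of the Viterbi algorithm on a toy HMM follows; all model parameters are made-up placeholders, not a real speech model.

    import numpy as np

    # Toy HMM parameters; all values are made-up placeholders.
    start = np.array([0.6, 0.4])             # initial state probabilities
    trans = np.array([[0.7, 0.3],
                      [0.4, 0.6]])           # trans[i, j] = P(state j | state i)
    emit  = np.array([[0.5, 0.4, 0.1],
                      [0.1, 0.3, 0.6]])      # emit[i, k] = P(observation k | state i)
    obs = [0, 1, 2]                          # an observed symbol sequence

    n_states, T = trans.shape[0], len(obs)
    delta = np.zeros((T, n_states))          # best path probability ending in each state
    psi = np.zeros((T, n_states), dtype=int) # back-pointers for path recovery

    delta[0] = start * emit[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            scores = delta[t - 1] * trans[:, j]
            psi[t, j] = scores.argmax()
            delta[t, j] = scores.max() * emit[j, obs[t]]

    # Backtrack from the best final state to recover the most likely state sequence.
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    path.reverse()
    print(path)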

Markov decision process

A Markov decision process is a Markov chain in which state transitions depend on the current state and an action applied to the system. Typically, a Markov decision process is used to compute a policy of actions that maximizes some utility with respect to expected rewards. It is closely related to reinforcement learning, and can be solved with value iteration and related methods.
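Here is a minimal value-iteration sketch for a tiny, invented MDP with three states and two actions; the transition probabilities and rewards are placeholders chosen only to make the example run.

    import numpy as np

    # A tiny made-up MDP: 3 states, 2 actions.
    # P[a][s, s2] = P(next state s2 | state s, action a); R[s, a] = immediate reward.
    P = np.array([[[0.8, 0.2, 0.0],
                   [0.0, 0.9, 0.1],
                   [0.0, 0.0, 1.0]],
                  [[0.1, 0.9, 0.0],
                   [0.0, 0.1, 0.9],
                   [0.0, 0.0, 1.0]]])
    R = np.array([[0.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])
    gamma = 0.95

    V = np.zeros(3)
    for _ in range(1000):
        # Bellman backup: Q(s, a) = R(s, a) + gamma * sum_s2 P(s2 | s, a) * V(s2)
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(2)], axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = Q.argmax(axis=1)   # greedy policy with respect to the converged values
    print(V_new, policy)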

Partially observable Markov decision process

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. Solving a POMDP exactly is computationally intractable in general, but approximation techniques have made POMDPs useful for a variety of applications, such as controlling simple agents or robots.
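One common way to work with a POMDP is to maintain a belief (a probability distribution over the hidden states) and update it after each action and observation via Bayes' rule. The sketch below assumes a tiny invented model with two states, one action, and two observations.

    import numpy as np

    # Placeholder POMDP model with 2 hidden states, 1 action, 2 observations.
    # T[a][s, s2] = P(s2 | s, a); O[a][s2, o] = P(o | s2, a). Values are illustrative.
    T = np.array([[[0.9, 0.1],
                   [0.2, 0.8]]])
    O = np.array([[[0.7, 0.3],
                   [0.4, 0.6]]])

    def belief_update(b, a, o):
        # Predict step: push the current belief through the transition model.
        predicted = b @ T[a]
        # Correct step: weight by the likelihood of the received observation.
        updated = predicted * O[a][:, o]
        return updated / updated.sum()

    b = np.array([0.5, 0.5])        # uniform prior over the hidden states
    print(belief_update(b, a=0, o=1))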

Markov random field

A Markov random field, or Markov network, may be considered to be a generalization of a Markov chain in multiple dimensions. In a Markov chain, state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables with which it is connected. More specifically, the joint distribution over all the variables in the graph factorizes, up to a normalization constant, as the product of the "clique potentials" defined on the cliques of the graph. Modeling a problem as a Markov random field is useful because this factorization makes the local dependency structure explicit, which representation and inference algorithms can exploit.
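A minimal sketch of this factorization for a three-node chain A - B - C over binary variables; the pairwise potentials are chosen arbitrarily for illustration.

    import itertools
    import numpy as np

    # A three-node Markov random field A - B - C over binary variables, with
    # pairwise clique potentials whose values are chosen arbitrarily.
    phi_AB = np.array([[30.0, 5.0],
                       [1.0, 10.0]])    # potential on clique {A, B}
    phi_BC = np.array([[100.0, 1.0],
                       [1.0, 100.0]])   # potential on clique {B, C}

    def unnormalized(a, b, c):
        # Product of the clique potentials for one full assignment.
        return phi_AB[a, b] * phi_BC[b, c]

    # The partition function Z sums this product over all assignments.
    Z = sum(unnormalized(a, b, c)
            for a, b, c in itertools.product([0, 1], repeat=3))

    # Joint probability of one assignment, e.g. (A=0, B=0, C=1).
    print(unnormalized(0, 0, 1) / Z)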

Hierarchical Markov Models

Hierarchical Markov Models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as the task or activity the person is performing. Two kinds of Hierarchical Markov Models are the Hierarchical Hidden Markov Model and the Abstract Hidden Markov Model. Both have been used for behavior recognition, and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.

100 questions
1
vote
0 answers

model to evaluate disease worsening/improvement after a therapy change

I have some experience with regression analysis, Cox models, etc., but I would like to ask which model would suit best to evaluate symptomatic control of a chronic disease associated with different drug changes. Suppose you have time-series data for…
cccnrc
  • 1,195
  • 11
  • 27
1
vote
1 answer

Extended Raftery Markov Chain function minimization using Python

I am working on the extended Raftery model, which is a more general higher-order Markov chain model, for which I need to solve the following linear programming model with certain constraints. Following is the (link) linear programming function that needs…
1
vote
0 answers

How to refine the Graphcut cmex code based on a specific energy function?

I downloaded the following graph-cut code: https://github.com/shaibagon/GCMex I compiled the mex files and ran it for the pre-defined image in the code (which is an RGB image). I want to optimize the image segmentation results; I have a probability map of the…
1
vote
1 answer

What is the meaning of Values row in POMDP?

I am studying the POMDP file format and following this and many other links. I have understood everything, but I can't work out what the Value in the second row of the file stands for. Its values are Reward or Cost. I can't find the answer elsewhere. Getting…
Oskars
  • 407
  • 4
  • 24
1
vote
0 answers

Markov-Switching GARCH models and parallel in R

First time asking a question here, I'll do my best to be explicit - but let me know if I should provide more info! I'm currently working with the "MSGARCH" package in R (version 3.3.3). I'm trying to calculate rolling VaR for 288 MS-GARCH models, but…
1
vote
2 answers

Transition matrix of an absorbing higher-order Markov chain

I have an absorbing Markov chain; let's say I have the states s = {START, S1, S2, END1, END2}. The state START will always be the starting point of the chain; however, it will not be possible to return to this state once you leave it. I am curious how…
Developer
  • 917
  • 2
  • 9
  • 25
1
vote
1 answer

Markov Switching Model in Python Statsmodels

I would like to estimate a Markov Switching Model as done in the following: http://www.chadfulton.com/posts/mar_hamilton.html However, when I try to import the function to fit the model, i.e. from statsmodels.tsa.mar_model import MAR I get the…
1
vote
2 answers

Markov decision process: same action leading to different states

Last week I read a paper suggesting MDPs as an alternative solution for recommender systems. The core of that paper was the representation of the recommendation process in terms of an MDP, i.e. states, actions, transition probabilities, a reward function and…
mangusta
  • 3,470
  • 5
  • 24
  • 47
1
vote
2 answers

Fitting Markov Switching Models to data in R

I'm trying to fit two kinds of Markov Switching Models to a time series of log-returns using the package MSwM in R. The models I'm considering are a regression model with only an intercept, and an AR(1) model. Here is the code I'm…
Egodym
  • 453
  • 1
  • 8
  • 23
1
vote
1 answer

Update Rule in Temporal difference

The TD(0) Q-learning update rule: Q(t-1) = (1 - alpha) * Q(t-1) + alpha * (Reward(t-1) + gamma * Max(Q(t))). Then take either the current best action (to optimize) or a random action (to explore), where MaxNextQ is the maximum Q that can be got in…
1
vote
1 answer

Log likelihood of a Markov network

I am having trouble understanding the following figure from a Coursera class: As far as I understand, the equation corresponds to the factor table: And therefore the likelihood of a sample data point (a = 0, b = 0, c = 1), for example, would be: It doesn't…
Dzung Nguyen
  • 3,794
  • 9
  • 48
  • 86
1
vote
2 answers

Computing Eigenvalues/Eigenvectors of a stochastic matrix

I am having difficulty determining the stationary distribution of a Markov model. I am starting to understand the theory and connections: given a stochastic matrix, to determine the stationary distribution we need to find the eigenvector for the largest…
Drey
  • 3,314
  • 2
  • 21
  • 26
1
vote
1 answer

SPSS Syntax - How to deal with missing values through SPSS Syntax

I'm new to this forum. I have to do a presentation on how SPSS deals with missing values. Specifically, our professor gave us the task to: 1) Find out if, besides the functions accessible through the menus, there are functions accessible via SPSS…
1
vote
1 answer

Degrees of Freedom of Markov Chains

I have a set of 5000 strings of length 4, where each character in the string can be either A, B, C, or D. A 0-order Markov chain (no dependency) makes a 4×1 array of columns A, B, C, D. A 1st-order Markov chain (position j depends on the previous position i) makes a…
user1830307
1
vote
1 answer

Moving Between States in a Markov Model - How to Tell R?

I have been struggling with this problem for quite a while and any help would be much appreciated. I am trying to write a function to calculate a transition matrix from observed data for a Markov model. The initial data I am using to build the…
Cdog
  • 11
  • 1