Questions tagged [markov-models]

Markov chain

The simplest Markov model is the Markov chain. It models the state of a system with a random variable that changes through time. In this context, the Markov property states that the distribution of this variable at the next time step depends only on the value of the current state, not on the full history. An example use of a Markov chain is Markov Chain Monte Carlo, which uses the Markov property to prove that a particular method for performing a random walk will converge to and sample from a desired target distribution, often the joint distribution of a system.
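
For illustration, a minimal sketch of simulating such a chain in Python; the two weather states and their transition probabilities below are invented for the example:

    import random

    # Hypothetical transition probabilities: transitions[state][next_state]
    transitions = {
        "sunny": {"sunny": 0.8, "rainy": 0.2},
        "rainy": {"sunny": 0.4, "rainy": 0.6},
    }

    def step(state):
        """Sample the next state from the current state's transition row."""
        r, cumulative = random.random(), 0.0
        for nxt, p in transitions[state].items():
            cumulative += p
            if r < cumulative:
                return nxt
        return nxt  # guard against floating-point rounding

    state, path = "sunny", []
    for _ in range(10):
        state = step(state)
        path.append(state)
    print(path)

Note that step only ever looks at the current state, which is exactly the Markov property in action.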

Hidden Markov model

A hidden Markov model is a Markov chain for which the state is only partially observable. In other words, observations are related to the state of the system, but they are typically insufficient to precisely determine the state. Several well-known algorithms for hidden Markov models exist. For example, given a sequence of observations, the Viterbi algorithm will compute the most-likely corresponding sequence of states, the forward algorithm will compute the probability of the sequence of observations, and the Baum–Welch algorithm will estimate the starting probabilities, the transition function, and the observation function of a hidden Markov model. One common use is for speech recognition, where the observed data is the speech audio waveform and the hidden state is the spoken text. In this example, the Viterbi algorithm finds the most likely sequence of spoken words given the speech audio.
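
A minimal sketch of the Viterbi algorithm in Python, using a made-up two-state weather model with three observable activities (real HMM libraries implement this more robustly):

    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Return the most likely state sequence for a sequence of observations."""
        # V[t][s] = (probability of the best path ending in state s at time t, that path)
        V = [{s: (start_p[s] * emit_p[s][obs[0]], [s]) for s in states}]
        for o in obs[1:]:
            row = {}
            for s in states:
                # Best predecessor for state s at this time step
                prob, prev = max(
                    (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
                )
                row[s] = (prob, V[-1][prev][1] + [s])
            V.append(row)
        return max(V[-1].values())[1]

    # Hypothetical model: hidden weather, observed activities
    states = ("sunny", "rainy")
    start_p = {"sunny": 0.6, "rainy": 0.4}
    trans_p = {"sunny": {"sunny": 0.7, "rainy": 0.3},
               "rainy": {"sunny": 0.4, "rainy": 0.6}}
    emit_p = {"sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1},
              "rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5}}
    print(viterbi(["walk", "shop", "clean"], states, start_p, trans_p, emit_p))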

Markov decision process

A Markov decision process is a Markov chain in which state transitions depend on the current state and an action vector that is applied to the system. Typically, a Markov decision process is used to compute a policy, a mapping from states to actions, that maximizes some utility with respect to expected rewards. It is closely related to reinforcement learning, and can be solved with value iteration and related methods.
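
A minimal value iteration sketch in Python; the toy two-state, two-action MDP and the discount factor are invented for the example:

    # Hypothetical MDP: transition[s][a] = list of (probability, next_state, reward)
    transition = {
        0: {"stay": [(1.0, 0, 0.0)], "go": [(0.9, 1, 5.0), (0.1, 0, 0.0)]},
        1: {"stay": [(1.0, 1, 1.0)], "go": [(1.0, 0, 0.0)]},
    }
    gamma = 0.9  # discount factor

    # Repeatedly apply the Bellman optimality backup until values settle
    V = {s: 0.0 for s in transition}
    for _ in range(100):
        V = {
            s: max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in transition[s].values()
            )
            for s in transition
        }

    # Extract the greedy policy from the converged values
    policy = {
        s: max(transition[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                                for p, s2, r in transition[s][a]))
        for s in transition
    }
    print(V, policy)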

Partially observable Markov decision process

A partially observable Markov decision process (POMDP) is a Markov decision process in which the state of the system is only partially observed. Solving POMDPs exactly is known to be computationally intractable (PSPACE-hard even for finite horizons), but recent approximation techniques have made them useful for a variety of applications, such as controlling simple agents or robots.
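
Because the state is hidden, a POMDP agent typically maintains a belief, a probability distribution over states, and updates it with Bayes' rule after each action and observation. A minimal sketch of that update in Python, with invented transition and observation probabilities:

    # Belief update: b'(s2) is proportional to O(o | s2, a) * sum_s T(s2 | s, a) * b(s)
    def update_belief(belief, action, obs, T, O):
        """One Bayesian belief update for a discrete POMDP."""
        new_belief = {
            s2: O[s2][action][obs] * sum(T[s][action][s2] * belief[s] for s in belief)
            for s2 in belief
        }
        total = sum(new_belief.values())  # normalizing constant
        return {s: p / total for s, p in new_belief.items()}

    # Invented two-state example: T[s][a][s2] and O[s2][a][o]
    T = {0: {"a": {0: 0.7, 1: 0.3}}, 1: {"a": {0: 0.2, 1: 0.8}}}
    O = {0: {"a": {"ping": 0.9, "silence": 0.1}},
         1: {"a": {"ping": 0.3, "silence": 0.7}}}
    print(update_belief({0: 0.5, 1: 0.5}, "a", "ping", T, O))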

Markov random field

A Markov random field, or Markov network, may be considered a generalization of a Markov chain to multiple dimensions. In a Markov chain, the state depends only on the previous state in time, whereas in a Markov random field, each state depends on its neighbors in any of multiple directions. A Markov random field may be visualized as a field or graph of random variables, where the distribution of each random variable depends on the neighboring variables with which it is connected. More specifically, the joint distribution over all the random variables in the graph factorizes as the normalized product of "clique potentials", one for each clique in the graph. Modeling a problem as a Markov random field is useful because this factorization is what makes inference over the graph tractable in many cases.
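
A minimal sketch of this factorization in Python for a three-variable chain MRF A - B - C; the pairwise clique potentials are made up, and Z is the partition function that normalizes the product:

    from itertools import product

    # Hypothetical pairwise clique potentials phi(A,B) and phi(B,C)
    phi_ab = {(0, 0): 3.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 3.0}
    phi_bc = {(0, 0): 2.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 2.0}

    # Unnormalized joint: product of the clique potentials
    unnorm = {(a, b, c): phi_ab[(a, b)] * phi_bc[(b, c)]
              for a, b, c in product((0, 1), repeat=3)}
    Z = sum(unnorm.values())  # partition function

    joint = {x: v / Z for x, v in unnorm.items()}
    print(joint[(0, 0, 0)])  # P(A=0, B=0, C=0)

Conditional queries such as P(A | C = 1) then follow by summing and renormalizing entries of this joint.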

Hierarchical Markov models

Hierarchical Markov models can be applied to categorize human behavior at various levels of abstraction. For example, a series of simple observations, such as a person's location in a room, can be interpreted to determine more complex information, such as the task or activity the person is performing. Two kinds of hierarchical Markov models are the hierarchical hidden Markov model and the abstract hidden Markov model. Both have been used for behavior recognition, and certain conditional independence properties between different levels of abstraction in the model allow for faster learning and inference.

100 questions
2
votes
1 answer

How to find the Markov blanket for a node?

I want to do feature selection using a Markov blanket algorithm. I am wondering whether there is any API in Java/Weka or in Python to find the Markov blanket. Consider I have a dataset with a number of variables and one target variable. I want…
Rashida Hasan
  • 149
  • 3
  • 13
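
For reference, the Markov blanket of a node in a Bayesian network is its parents, its children, and its children's other parents. A library-free Python sketch of computing it from an edge list (the example graph is invented):

    def markov_blanket(edges, node):
        """Markov blanket = parents + children + children's other parents."""
        parents = {u for u, v in edges if v == node}
        children = {v for u, v in edges if u == node}
        spouses = {u for u, v in edges if v in children and u != node}
        return parents | children | spouses

    # Hypothetical network: A -> T, S -> L, T -> X, L -> X
    edges = [("A", "T"), ("S", "L"), ("T", "X"), ("L", "X")]
    print(markov_blanket(edges, "T"))  # {'A', 'X', 'L'}
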
2
votes
0 answers

Using a Markov Model to analyze a text input in Java

I'm very new to Java and I'm required to use a Markov Model to analyze a text (String) input. I will be honest: this is for an assignment. I am looking to learn how to answer it, not just copy and paste code. The code I am working with (this is…
K Lee
  • 21
  • 1
2
votes
1 answer

Find the path with the maximum likelihood between two vertices in a Markov model

Given a Markov model, which has a start state named S and an exit state named F, and this model can be represented as a directed graph, with some constraints: every edge has a weight that falls in the range (0,1] as the transition…
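
A standard approach to the question above: because a path's likelihood is the product of its edge probabilities, replacing each weight w with -log(w) turns the problem into a shortest-path search with non-negative costs. A Dijkstra sketch in Python (the example graph is made up):

    import heapq
    import math

    def most_likely_path(graph, start, goal):
        """Dijkstra on -log(edge probability); minimizing the sum of -log(p)
        maximizes the product of probabilities along the path."""
        heap = [(0.0, start, [start])]
        seen = set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == goal:
                return math.exp(-cost), path  # (likelihood, path)
            if node in seen:
                continue
            seen.add(node)
            for nxt, p in graph.get(node, {}).items():
                if nxt not in seen:
                    heapq.heappush(heap, (cost - math.log(p), nxt, path + [nxt]))
        return 0.0, None

    # Hypothetical model: graph[u][v] = transition probability in (0, 1]
    graph = {"S": {"a": 0.5, "b": 0.5}, "a": {"F": 0.9}, "b": {"F": 0.3}}
    print(most_likely_path(graph, "S", "F"))  # (0.45, ['S', 'a', 'F'])
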
2
votes
0 answers

First-order Markov text processing in Python

I wrote code for generating text from a given text file. I use a first-order Markov model. First I create a dictionary from the text file. In the case of punctuation ('.','?','!') its key is '$'. After creating the dictionary I generate text randomly from the created…
ohid
  • 824
  • 2
  • 8
  • 23
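
A minimal sketch of the technique the question above describes: build a word-to-successors dictionary from a text, then generate by random walk. The sample text is invented, and punctuation is simply treated as a token here rather than keyed as '$':

    import random

    text = "the cat sat . the cat ran . the dog sat ."
    words = text.split()

    # Map each word to the list of words that follow it (first-order model)
    successors = {}
    for current, nxt in zip(words, words[1:]):
        successors.setdefault(current, []).append(nxt)

    def generate(start, length):
        """Random-walk the successor dictionary to produce text."""
        out, word = [start], start
        for _ in range(length - 1):
            options = successors.get(word)
            if not options:
                break
            word = random.choice(options)
            out.append(word)
        return " ".join(out)

    print(generate("the", 8))
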
2
votes
0 answers

Markov decision process / stochastic optimal control solver in C/C++

I am looking for a solver/optimizer for a Markov decision process / stochastic optimal control problem (see also Sequential Decision Making under Uncertainty). The problem is described by a set of differential equations, but it may be discretized…
2
votes
2 answers

Artificial neural networks and Markov Processes

I have read a little about ANNs and Markov processes. Can someone please help me understand where exactly Markov processes fit in with ANNs and genetic algorithms? Or simply, what could be the role of Markov processes in this scenario? Thanks a lot.
Shahzad
  • 1,999
  • 6
  • 35
  • 44
2
votes
2 answers

Library for a Markov Decision Process in C#

I'm working on a project to create an AI engine, where a robot is exploring a 2D gridded world and has to decide what square to move to next. Are there existing Markov libraries that could be used (i.e., I would just change the parameters), or samples…
1
vote
0 answers

LMest: problem introducing covariates to the measurement model when fitting a Latent Markov Model to continuous data

I am working with longitudinal continuous data that reflect the linguistic abilities of children. In that regard I seek to make a Latent Transition Model, more precisely a Latent Markov Model, using the LMest package in R. As far as I have understood…
1
vote
1 answer

The R mstate package takes data that has a "status" variable. The "status" can either be 0 or 1. What does 0 mean and what does 1 mean?

This is some example data. Below is a quote from the paper (de Wreede et al. 2010) regarding the "status" variable: "We need one line for each individual for each transition for which he/she is at risk, containing data about her/his identity (id), the…
1
vote
1 answer

How can I obtain the attribution of a channel per consumer in their purchase decision with an attribution model (Markov chain)?

In the last few days I have been working with Markov chains for a multi-touch (data-driven) attribution model. I have found a lot of important information at the macro level; for example, the ChannelAttribution package gives me the attribution of each…
1
vote
1 answer

How do I make a SimPy simulation depict a Markovian M/M/1 process?

[Screenshot: output printing the length of the arrival and service time lists.] I am trying to implement an M/M/1 Markovian process with exponential inter-arrival and exponential service times using SimPy. The code runs fine but I don't quite get the expected results. Also the…
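
For reference, a minimal SimPy M/M/1 sketch with exponential inter-arrival and service times and a single server; the rates are arbitrary example values:

    import random
    import simpy

    ARRIVAL_RATE, SERVICE_RATE = 1.0, 1.5  # lambda and mu (example values)
    waits = []

    def customer(env, server):
        arrived = env.now
        with server.request() as req:
            yield req  # queue for the single server
            waits.append(env.now - arrived)
            yield env.timeout(random.expovariate(SERVICE_RATE))

    def source(env, server):
        while True:
            yield env.timeout(random.expovariate(ARRIVAL_RATE))
            env.process(customer(env, server))

    env = simpy.Environment()
    server = simpy.Resource(env, capacity=1)
    env.process(source(env, server))
    env.run(until=10_000)
    # M/M/1 theory predicts a mean queueing wait of rho / (mu - lambda)
    print(f"mean wait: {sum(waits) / len(waits):.3f}")
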
1
vote
2 answers

When I try to implement MarkovModel using pgmpy, is there a way to fix KeyError?

I'm trying to implement a Markov random field. I would like to obtain the value of phi(A | B = 0, C = 1). However, with the evidence option, KeyError: 'B' occurs. I don't know why this happens. Below is the code: from pgmpy.inference import…
prior
  • 13
  • 3
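
For reference, a small sketch of querying a pairwise Markov network with evidence in pgmpy; the potentials are made up, and in recent pgmpy versions the class is named MarkovNetwork rather than MarkovModel. Note that evidence keys must exactly match the variable names used in the factors:

    from pgmpy.models import MarkovNetwork
    from pgmpy.factors.discrete import DiscreteFactor
    from pgmpy.inference import VariableElimination

    # Hypothetical pairwise MRF over binary variables A - B - C
    model = MarkovNetwork([("A", "B"), ("B", "C")])
    model.add_factors(
        DiscreteFactor(["A", "B"], cardinality=[2, 2], values=[3, 1, 1, 3]),
        DiscreteFactor(["B", "C"], cardinality=[2, 2], values=[2, 1, 1, 2]),
    )

    infer = VariableElimination(model)
    # P(A | B=0, C=1); the queried variable must not also appear in the evidence
    print(infer.query(variables=["A"], evidence={"B": 0, "C": 1}))
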
1
vote
1 answer

Generating Markov transition matrix for continuous data in Python

I am exploring the hidden Markov model (HMM) to analyse the sequence of new cases and the reproduction rate of COVID-19. I have come across a scenario where I need to generate a transition matrix for continuous data. X =…
RajeshDA
  • 481
  • 2
  • 13
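
A common recipe for the question above: discretize the continuous series into bins, then count bin-to-bin transitions and normalize each row. A NumPy sketch with random placeholder data:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)  # placeholder continuous series
    n_bins = 4

    # Discretize into equal-width bins -> integer state labels 0..n_bins-1
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    states = np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)

    # Count transitions and normalize each row into probabilities
    counts = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    transition_matrix = counts / counts.sum(axis=1, keepdims=True)
    print(transition_matrix.round(3))
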
1
vote
1 answer

What are the states and rewards in the reward matrix?

This code:

    R = ql.matrix([[0,0,0,0,1,0],
                   [0,0,0,1,0,1],
                   [0,0,100,1,0,0],
                   [0,1,1,0,1,0],
                   [1,0,0,1,0,0],
                   [0,1,0,0,0,0]])

is from…
blue-sky
  • 51,962
  • 152
  • 427
  • 752
1
vote
0 answers

Probabilistic Sensitivity Analysis for Markov Models using Heemod in R

I'm new to R and have been assigned a project in which I have to build a cost-effectiveness model in R. It's based on a Markov model. I'm currently just trying to get used to the interface and have installed the heemod package to assist in producing…
Health-eco
  • 11
  • 1