Questions tagged [markov]

The Markov property refers to the memoryless property of a stochastic process.

Overview

From Wikipedia,

A stochastic process has the Markov property if the conditional probability distribution of future states of the process depends only upon the present state, not on the sequence of events that preceded it.

A Markov process (Yt) satisfies:

P(Y_{t+1} = y | Y_t = y_t, Y_{t-1} = y_{t-1}, ..., Y_0 = y_0) = P(Y_{t+1} = y | Y_t = y_t)
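As a concrete illustration of the memoryless property, here is a minimal Python sketch of a discrete-state Markov chain; the two weather states and their transition probabilities are made up for illustration:

```python
import random

# Hypothetical two-state chain: the next state depends only on the current one.
states = ["sunny", "rainy"]
transition = {
    "sunny": {"sunny": 0.9, "rainy": 0.1},
    "rainy": {"sunny": 0.5, "rainy": 0.5},
}

def simulate(start, steps, rng=random):
    """Walk the chain for `steps` transitions starting from `start`."""
    path = [start]
    for _ in range(steps):
        current = path[-1]
        nxt = rng.choices(states, weights=[transition[current][s] for s in states])[0]
        path.append(nxt)
    return path

random.seed(0)
print(simulate("sunny", 5))
```

Note that `simulate` only ever looks at `path[-1]`: the whole history is recorded but never consulted, which is exactly the Markov property.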

Tag usage

Please consider asking questions about statistics or data analysis on Cross Validated (stats.stackexchange.com), the Stack Exchange site for statistics.

255 questions
0
votes
1 answer

How to forecast with new dataset by "MarkovAutoregression" model in Statsmodels?

I'd like to fit a MarkovAutoregression model with a training time-series dataset (train_data) and make it forecast with a validation time-series dataset (val_data). The training part is like below and I don't find any errors. import numpy as np from…
Ihmon
  • 183
  • 1
  • 13
0
votes
0 answers

Estimate Lazy-Gap using PPO actor-critic framework

I am trying to implement a "Lazy-MDP" agent in my RL algorithm. My reference for this is…
0
votes
1 answer

Problem with migrating Metropolis-Hastings algorithm from R to Python

New to Python. I am trying to port the R code for the Metropolis-Hastings algorithm found here over to Python. I have successfully duplicated the results in R, but am struggling with the Python version. I just can't seem to obtain anything close…
user1884367
  • 425
  • 2
  • 7
  • 15
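The R code the question links to isn't shown here, but a minimal random-walk Metropolis-Hastings sampler in Python looks like the following; the standard-normal target is chosen purely for illustration:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, proposal_sd=1.0, rng=random):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, proposal_sd),
    accept with probability min(1, target(x') / target(x))."""
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, proposal_sd)
        log_accept = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_accept)):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

# Target: standard normal, as a log density up to an additive constant.
def log_normal(x):
    return -0.5 * x * x

random.seed(42)
samples = metropolis_hastings(log_normal, 0.0, 5000)
```

A common source of R-to-Python discrepancies here is working with densities instead of log densities, which underflows quickly; the sketch above stays on the log scale throughout.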
0
votes
1 answer

Generate a Markov chain in Python using an object's attribute as state

Suppose I have several different states in the form of an Enum: class State(Enum): State1 = 1 State2 = 2 State3 = 3 I define transition probabilities between the states: transition_probabilities = [ [0.8, 0.1, 0.1], [0.2, 0.5,…
Dan Jackson
  • 185
  • 1
  • 9
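The excerpt's transition matrix is cut off, so everything past the first row below is made up for illustration; one way to drive such a chain with random.choices, indexing rows by the Enum's value:

```python
import random
from enum import Enum

class State(Enum):
    State1 = 1
    State2 = 2
    State3 = 3

# Row i gives the probabilities of moving from State(i+1) to each state.
# Only the first row comes from the question; the rest are assumptions.
transition_probabilities = [
    [0.8, 0.1, 0.1],
    [0.2, 0.5, 0.3],
    [0.3, 0.3, 0.4],
]

def next_state(current: State, rng=random) -> State:
    """Sample the successor of `current` using its row of the matrix."""
    row = transition_probabilities[current.value - 1]
    return rng.choices(list(State), weights=row)[0]

def walk(start: State, steps: int) -> list:
    chain = [start]
    for _ in range(steps):
        chain.append(next_state(chain[-1]))
    return chain

random.seed(1)
print(walk(State.State1, 4))
```

`list(State)` iterates members in definition order, so it lines up with the matrix columns; `random.choices` normalizes the weights, so the rows do not strictly need to sum to 1.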
0
votes
0 answers

Markov Decision Process: Computing the Q Value Question

I am working on four functions to implement the MDP in Python. I need help understanding how I would calculate the next state value V(s'). I know the equation Q_value = Reward + Gamma * (Summation of (transition probability * next state value)). And for the…
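In standard notation the excerpt's equation is Q(s, a) = R(s, a) + γ · Σ_{s'} P(s' | s, a) · V(s'); a tiny sketch with made-up numbers:

```python
def q_value(reward, gamma, transitions, V):
    """Q(s, a) = R + gamma * sum over s' of P(s'|s, a) * V(s').
    `transitions` maps each next state s' to its probability P(s'|s, a);
    `V` maps each state to its current value estimate."""
    return reward + gamma * sum(p * V[s] for s, p in transitions.items())

# Hypothetical numbers: two successor states.
V = {"s1": 10.0, "s2": 0.0}
q = q_value(reward=1.0, gamma=0.9, transitions={"s1": 0.8, "s2": 0.2}, V=V)
print(q)  # 1.0 + 0.9 * (0.8 * 10.0 + 0.2 * 0.0) = 8.2
```

V(s') here is just a lookup into the current value table; in value iteration it would in turn be the max over actions of the same Q computation.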
0
votes
0 answers

Transition probabilities from a Weibull distribution (Health Economic Evaluation)

I'm trying to understand the logic for getting transition probabilities in the book "Decision Modelling for Health Economic Evaluation" by Briggs. In chapter 3 they give the STATA output: And with the formula tp = 1-exp(lambda(t-u)^gamma -…
Britt Bay
  • 17
  • 3
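The truncated formula appears to be the time-dependent transition probability tp(t) = 1 − exp(λ(t−u)^γ − λt^γ), which follows from the Weibull cumulative hazard H(t) = λt^γ with cycle length u. A sketch assuming that form, with illustrative parameters only (not the book's STATA output):

```python
import math

def weibull_tp(t, u, lam, gamma):
    """Transition probability for the cycle (t-u, t] under a Weibull with
    cumulative hazard H(t) = lam * t**gamma:
        tp = 1 - exp(H(t-u) - H(t))
    Assumes t >= u > 0; lam and gamma are scale and shape parameters."""
    return 1.0 - math.exp(lam * (t - u) ** gamma - lam * t ** gamma)

# Illustrative parameters; with gamma > 1 the hazard (and tp) rises over time.
for cycle in (1, 5, 10):
    print(cycle, round(weibull_tp(t=cycle, u=1, lam=0.01, gamma=1.5), 4))
```

With gamma = 1 this collapses to the constant-rate case tp = 1 − exp(−λu), which is a useful sanity check against the exponential model.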
0
votes
1 answer

Grey-Markov method in R

In R, I have loaded the built-in time series AirPassengers and split it into training and test data like this: rm(list = ls()) data = AirPassengers traindata = ts(data[1:(0.75*length(data))], frequency = 12) testdata =…
Frank
  • 31
  • 5
0
votes
0 answers

Write a Markov algorithm that converts unary numbers to decimal

I want to write a Markov algorithm that converts unary numbers such as ||| (3) and ||||| (5) into decimal numbers. I'm studying algorithms for fun and came across this problem. I seem to understand how to do this for numbers smaller than 100, but…
Nadia
  • 3
  • 1
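A Markov algorithm is an ordered list of rewrite rules; at each step the first rule (in order) whose left side occurs in the string rewrites its leftmost occurrence, and the process halts when no rule applies. Here is a sketch of an interpreter plus one rule set that handles carries, so it works past 100; the rule set is my own construction, not necessarily the textbook's intended answer:

```python
def run_markov(rules, s, max_steps=100000):
    """Repeatedly apply the first rule (in order) whose left side occurs
    in s, rewriting the leftmost occurrence, until no rule applies."""
    for _ in range(max_steps):
        for lhs, rhs in rules:
            if lhs in s:
                s = s.replace(lhs, rhs, 1)  # leftmost occurrence only
                break
        else:
            return s  # no rule applied: halt
    raise RuntimeError("step limit exceeded")

# Increment-with-carry: "d|" -> "d+1" consumes one bar; "9|" -> "|0" pushes
# the carry bar leftward; a bar with no digit before it becomes a leading "1".
rules = [(f"{d}|", str(d + 1)) for d in range(9)] + [("9|", "|0"), ("|", "1")]

print(run_markov(rules, "|||"))  # "3"
```

The rule order matters: the digit-increment rules must precede the bare "|" rule, otherwise every bar would immediately turn into a fresh "1" instead of incrementing the counter.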
0
votes
0 answers

Generate Markov Transform Matrix from Time Series Data in Spark Dataframe

I have a question about the most efficient way to generate a Markov transform matrix for about one million IDs. Each account has 24 hours/day x 90 days/year x 10 years of data. The data are in the format as…
Cu Buffalo
  • 59
  • 2
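Spark aside, the core computation is just counting consecutive state pairs per ID and row-normalizing; a plain-Python sketch of that step (the state names are made up):

```python
from collections import Counter, defaultdict

def transition_matrix(sequence):
    """Count transitions between consecutive observations and normalize
    each row so it sums to 1."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    matrix = {}
    for state, row in counts.items():
        total = sum(row.values())
        matrix[state] = {s: c / total for s, c in row.items()}
    return matrix

# Toy hourly usage states for one ID.
seq = ["idle", "idle", "active", "idle", "active", "active"]
print(transition_matrix(seq))
```

In Spark the same pair counts can be produced by attaching each row's successor with a window `lag`/`lead` per ID ordered by timestamp, then a `groupBy` on the (current, next) pair, normalizing within each source state.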
0
votes
0 answers

markov-chain: understand estimation of missing transition probabilities

The following Markov chain with missing transition probabilities p and q is given: It is also known how often the different final states occur. A = ['E', 'D', 'F', 'D', 'D', 'F', 'E', 'D', 'F', 'F', 'D', 'E'] The goal is to estimate p and…
0
votes
0 answers

Making discord bot following Markov tutorial. TypeError: Client.__init__() missing 1 required keyword-only argument: 'intents'

While trying to create a Discord bot, with the virtual environment running, I run the command py markov.py. Here is the source code, followed by the error that is returned. """A Markov chain generator that can tweet random messages.""" import…
0
votes
0 answers

How to figure out the construction criteria of a Markov matrix?

I have built the matrix below (sim1) with the following function. MarkovCohort <- function(P, z0, ncycles, costs, qolw, discount){ # Calculates quantities for cost-effectiveness analysis using a markov # cohort model. # #…
12666727b9
  • 1,133
  • 1
  • 8
  • 22
0
votes
1 answer

Cannot access the MarkovCohort() function

I am trying to replicate the following tutorial (https://devinincerti.com/2015/10/15/markov_cohort.html) Unfortunately, I am not able to find out where the package that enables working with the MarkovCohort() function comes from. I have just pasted…
12666727b9
  • 1,133
  • 1
  • 8
  • 22
0
votes
1 answer

Markov Inequality Plot in R

I have a Normal distribution CCDF plot made in R. I need to apply the Markov inequality to this data and draw it on the same plot. How can I implement it? Any help is welcome. My data and what I have: n01 <- rnorm(1000, mean = 27947623, sd =…
penelope
  • 15
  • 3
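The Markov inequality states P(X ≥ a) ≤ E[X]/a for non-negative X, so the bound is just the curve E[X]/a plotted over the empirical CCDF. The question uses R, but the computation is the same in any language; a Python sketch, mimicking the question's setup (the large positive mean keeps the simulated values effectively non-negative):

```python
import random

def empirical_ccdf(data, a):
    """Fraction of observations >= a."""
    return sum(1 for x in data if x >= a) / len(data)

def markov_bound(data, a):
    """Markov inequality bound E[X]/a, capped at 1; valid for non-negative X."""
    mean = sum(data) / len(data)
    return min(1.0, mean / a)

random.seed(0)
data = [random.gauss(27947623, 1000000) for _ in range(1000)]

for a in (27000000, 28000000, 30000000):
    print(a, empirical_ccdf(data, a), round(markov_bound(data, a), 4))
```

Note the bound is very loose for thresholds near the mean (it stays close to 1); it only becomes informative well into the tail, which is worth keeping in mind when interpreting the overlaid plot.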
0
votes
1 answer

How do I generate sentences with Rita.js?

I installed RiTa through Node, following a RiTa.js egghead.io tutorial. After running the following code in my rita.js file in the terminal, it shows a lot of data and says Error: No valid sentence-starts remaining var rita= require('rita'); var…
Idamara
  • 1
  • 2