
I would like to calculate the marginal likelihood of a model given a dataset, so that I can compare it with another model via the Bayes factor.

I used PyMC 2 to obtain the posterior distributions of each parameter for each model.

Here is the principle (I used MCMC):

## PRIOR
myPar = pymc.Normal(name='Parameter', mu=0.3, tau=1/0.2**2, value=0.3)

## LOG LIKELIHOOD
@pymc.stochastic(observed=True)
def mesLL(myPar=myPar, value=Obs):
    # Gaussian log-likelihood of the observations given the model prediction.
    # The mean and precision do not depend on i, so compute them once and
    # let pymc.normal_like sum over the whole array of observations.
    myMean = model(myPar)
    myTau  = 1. / sigMes**2
    return pymc.normal_like(value, mu=myMean, tau=myTau)

## SAMPLER
np.random.seed(123456)  # PyMC 2 draws from NumPy's global RNG
ModBayes = pymc.Model([myPar,mesLL])
sampler  = pymc.MCMC(ModBayes)
sampler.use_step_method(pymc.AdaptiveMetropolis, [myPar])
sampler.sample(iter = 10000, burn = 4000, thin = 3)

Now I don't know how to compute the marginal likelihood from these samples.

Thank you in advance.

1 Answer


The approximate marginal posterior distribution of each sampled parameter is just the histogram of its sampled values. PyMC2 lacks the more complete plotting tools of PyMC3 (and now ArviZ), but you can simply use matplotlib (similar to what is done in the example in the docs). In this case, it would be something like

from matplotlib.pyplot import hist

# The traces live on the MCMC sampler object, not on the Model
hist(sampler.trace('myPar')[:], density=True)
hist(sampler.trace('mesLL')[:], density=True)
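As for the marginal likelihood itself (the normalizing constant you need for a Bayes factor), PyMC2 does not compute it for you. One rough option is the harmonic mean estimator, built from the log-likelihood evaluated at the posterior samples. Be warned that this estimator is notoriously high-variance, so treat the sketch below as a starting point only; it assumes you have an array of per-sample log-likelihood values (for instance, a recorded `sampler.trace('mesLL')[:]`):

```python
import numpy as np

def harmonic_mean_log_evidence(loglike):
    """Harmonic mean estimate of the log marginal likelihood:
    log p(D) ~= log N - logsumexp(-loglike).

    Simple but notoriously unstable; shown only as a sketch.
    """
    loglike = np.asarray(loglike, dtype=float)
    neg = -loglike
    m = neg.max()
    # numerically stable logsumexp(-loglike)
    lse = m + np.log(np.exp(neg - m).sum())
    return np.log(len(loglike)) - lse

# hypothetical usage, assuming the 'mesLL' trace was recorded:
# log_ml = harmonic_mean_log_evidence(sampler.trace('mesLL')[:])
```

The Bayes factor between two models is then `exp(log_ml_1 - log_ml_2)`. For anything serious, a more stable estimator (e.g. thermodynamic integration or bridge sampling) is preferable.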