I have the following model in pymc2:

import math
import numpy as np
import pymc
from scipy.stats import gamma

alpha = pymc.Uniform('alpha', 0.01, 2.0)
scale = pymc.Uniform('scale', 1.0, 4.0)

@pymc.deterministic(plot=False)
def beta(scale=scale):
    # pymc's Gamma is parametrized by the rate beta = 1 / scale
    return 1.0 / scale

# lmin (truncation limit) and sample (the observed data) are defined elsewhere
@pymc.potential
def p_factor(alpha=alpha, scale=scale, lmin=lmin, n=len(sample)):
    # truncation correction: each datum comes from the gamma LF
    # renormalized to the detectable part L > lmin
    dist = gamma(alpha, loc=0., scale=scale)
    fp = 1.0 - dist.cdf(lmin)
    return -(n + 1) * np.log(fp)

obs = pymc.Gamma("obs", alpha=alpha, beta=beta, value=sample, observed=True)

The physical background of this model is the luminosity function of galaxies (LF), i.e., the probability of a galaxy having luminosity L. For some types of galaxies the LF is well described by a gamma distribution. The potential accounts for data truncation, as galaxy surveys usually miss a substantial fraction of the targets, particularly those of low luminosity; in this model everything below lmin is missed.
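For reference, a truncated sample of this kind can be simulated as follows (a minimal sketch; alpha_true, scale_true, lmin and the sizes are made-up illustration values):

import numpy as np
from scipy.stats import gamma

# hypothetical true parameters and truncation limit
alpha_true, scale_true, lmin = 1.0, 2.0, 0.5

# draw from the full gamma LF, then keep only the "detected" part L > lmin
raw = gamma.rvs(alpha_true, loc=0., scale=scale_true, size=100000)
sample = raw[raw > lmin][:1000]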

Details of this method can be found in this paper by Kelly et al.

This model works: I run MAP and MCMC on it and recover the parameters alpha and scale from my simulated data sample, with the uncertainty increasing as lmin grows.
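For completeness, this is how I drive the fit (a sketch of the calls, mirroring the code further down):

vals = {'alpha': alpha, 'scale': scale, 'beta': beta,
        'p_factor': p_factor, 'obs': obs}

# MAP point estimate first, then MCMC starting from it
pymc.MAP(vals).fit()
M = pymc.MCMC(vals)
M.sample(10000, burn=5000)
print M.stats()['alpha']['mean'], M.stats()['scale']['mean']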

Now I would like to add Gaussian measurement errors. For simplicity, all the data have the same precision. For the moment I am not modifying the potential to include the errors as well.

alpha = pymc.Uniform('alpha', 0.01, 2.0)
scale = pymc.Uniform('scale', 1.0, 4.0)
sig = 0.1                  # known measurement error, the same for every datum
tau = math.pow(sig, -2.0)  # precision tau = 1 / sig**2

@pymc.deterministic(plot=False)
def beta(scale=scale):
    return 1.0 / scale

@pymc.potential
def p_factor(alpha=alpha, scale=scale, lmin=lmin, n=len(sample)):
    dist = gamma(alpha, loc=0., scale=scale)
    fp = 1.0 - dist.cdf(lmin)
    return -(n+1) * np.log(fp)

# latent true luminosities drawn from the gamma LF ...
dist = pymc.Gamma("dist", alpha=alpha, beta=beta)
# ... observed with gaussian noise of precision tau
obs = pymc.Normal("obs", mu=dist, tau=tau, value=sample, observed=True)
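In this setup the data handed to obs should contain the noise. A minimal sketch, assuming sample so far held the noise-free truncated luminosities (noisy is my own name):

import numpy as np

# perturb the true values with gaussian errors of the known width sig
noisy = sample + np.random.normal(0.0, sig, size=len(sample))
# and use value=noisy in the Normal above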

But surely I'm doing something wrong here, because this model does not work: when I run pymc.MAP on it, I just recover the initial values of alpha and scale.

vals = {'alpha': alpha, 'scale': scale, 'beta': beta,
        'p_factor': p_factor, 'obs': obs, 'dist': dist}
M2 = pymc.MAP(vals)
M2.fit()
print M2.alpha.value, M2.scale.value
>>> (array(0.010000000006018368), array(1.000000000833973))

When I run pymc.MCMC, alpha and beta are not sampled at all.

M = pymc.MCMC(vals)
M.sample(10000, burn=5000)
...
M.stats()['alpha']
>>> {'95% HPD interval': array([ 0.01000001,  0.01000502]),
     'mc error': 2.1442678276712383e-07,
     'mean': 0.010001588137798096,
     'n': 5000,
     'quantiles': {2.5: 0.0100000088679046,
                   25: 0.010000382359859467,
                   50: 0.010001100377476166,
                   75: 0.010001668672799679,
                   97.5: 0.0100050194240779},
     'standard deviation': 2.189828287191421e-06}

Again, these are the initial values. In fact, if I change alpha to start at, say, 0.02, the recovered value of alpha is 0.02.
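One way to dig into this is to check which step methods PyMC assigned to each variable and whether the model starts from a finite log-probability; a diagnostic sketch, not a fix:

M = pymc.MCMC(vals)
M.sample(100)

# which step method handles each stochastic?
for stoch, methods in M.step_method_dict.items():
    print stoch.__name__, methods

# is the truncation potential finite at the current state?
print p_factor.logp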

This is a notebook with the working model plus simulated data.

This is a notebook with the error model plus simulated data.

Any guidance on making this work would be really appreciated.

Sergio

1 Answer


It seems that it is enough to change

dist = pymc.Gamma("dist", alpha=alpha, beta=beta)

to

dist = pymc.Gamma("dist", alpha=alpha, beta=beta, value=sample)

The sampled data is a reasonable initial value for dist. Still, I do not fully get the logic: other initial values (such as an array of zeros) bring back the problem of alpha and beta not being sampled, presumably because the gamma log-density is not finite at zero, so the chain starts from a zero-probability state.
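A quick check of that guess (the parameter values here are arbitrary):

import numpy as np

# assumption: a gamma stochastic initialized at 0 has logp = -inf,
# so the sampler cannot start there
try:
    bad = pymc.Gamma("bad", alpha=1.5, beta=0.5, value=np.zeros(10))
    print bad.logp
except pymc.ZeroProbability as e:
    print "zero-probability initial value:", e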

Sergio