I am looking into the PyMC3 package, and I am interested in applying it to a scenario where I have several different signals, each with a different amplitude.
However, I am stuck on which priors I would need to use to implement PyMC3 here, and which likelihood distribution to use. An example scenario is shown in the following image:
I tried to implement it here, but every time I run it I get the error:
pymc3.exceptions.SamplingError: Bad initial energy
My Code
## Signal 1:
with pm.Model() as model:
    # Parameters:
    # Prior distributions:
    # BoundedNormal = pm.Bound(pm.Exponential, lower=0.0, upper=np.inf)
    # c = BoundedNormal('c', lam=10)
    # c = pm.Uniform('c', lower=0, upper=300)
    alpha = pm.Normal('alpha', mu=0, sd=10)
    beta = pm.Normal('beta', mu=0, sd=1)
    sigma = pm.HalfNormal('sigma', sd=1)
    mu = pm.Normal('mu', mu=0, sigma=1)
    sd = pm.HalfNormal('sd', sigma=1)

    # Observed data is from a Multinomial distribution:
    # Likelihood distributions:
    # bradford = pm.DensityDist('observed_data', logp=bradford_logp, observed=dict(value=S1, loc=mu, scale=sd, c=c))
    # observed_data = pm.Beta('observed_data', mu=mu, sd=sd, observed=S1)
    observed_data = pm.Beta('observed_data', alpha=alpha, beta=beta, mu=mu, sd=sd, observed=S1)

with model:
    # Obtain starting values via MAP:
    startvals = pm.find_MAP(model=model)

    # Instantiate sampler:
    # step = pm.Metropolis()
    step = pm.HamiltonianMC()
    # step = pm.NUTS()

    # Draw posterior samples:
    trace = pm.sample(start=startvals, draws=1000, step=step, tune=500, chains=4, cores=1, discard_tuned_samples=True)

    # Obtaining posterior predictive samples:
    post_pred = pm.sample_posterior_predictive(trace, samples=500)
    print(post_pred['observed_data'].shape)

plt.title('Trace Plot of Signal 1')
pm.traceplot(trace, var_names=['mu', 'sd'], divergences=None, combined=True)
plt.show(block=False)
plt.pause(5)  # Pauses the program for 5 seconds
plt.close('all')

pm.plot_posterior(trace, var_names=['mu', 'sd'])
plt.title('Posterior Plot of Signal 1')
plt.show(block=False)
plt.pause(5)  # Pauses the program for 5 seconds
plt.close('all')
Side questions
I have also been looking into implementing goodness-of-fit tests and a Kalman filter that use distributions other than the Gaussian, so if you have the time I would appreciate it if you could have a look at them. Both questions can be found here:
Goodness-of-fit test link: Goodness-of-fit test
Kalman filter link: Kalman Filter
Edit 1
Say I have around 5 signals and would like to apply Bayesian inference in order to see the differences in the PDFs of the signals. How do I approach this problem? Do I need to create multiple models and obtain their posterior distributions, as in this image:
If I need to get the posterior distributions, do I use the following code?
# Obtaining Posterior Predictive Sampling:
post_pred = pm.sample_posterior_predictive(trace, samples=500)
Edit 2
If I have multiple signals, can I implement it this way in order to see the changes in alpha and beta across all signals?
observed_data_S1 = pm.Beta('observed_data_S1', alpha=alpha[0], beta=beta[0], observed=S1[0])
observed_data_S2 = pm.Beta('observed_data_S2', alpha=alpha[1], beta=beta[1], observed=S2[0])
observed_data_S3 = pm.Beta('observed_data_S3', alpha=alpha[2], beta=beta[2], observed=S3[0])
observed_data_S4 = pm.Beta('observed_data_S4', alpha=alpha[3], beta=beta[3], observed=S4[0])
observed_data_S5 = pm.Beta('observed_data_S5', alpha=alpha[4], beta=beta[4], observed=S5[0])
observed_data_S6 = pm.Beta('observed_data_S6', alpha=alpha[5], beta=beta[5], observed=S6[0])
Edit 3:
How can I plot multiple traces in one plot? Because I am looking at multiple signals, I thought of combining all the alphas and betas together in one plot.
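What I mean is something like the following: overlaying the posterior samples of alpha from each signal on a single axis with plain matplotlib (the stand-in posterior samples below are synthetic placeholders for the real trace['alpha'] arrays):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

# Hypothetical posterior samples standing in for trace['alpha'] per signal.
rng = np.random.default_rng(3)
posteriors = {f'S{i + 1}': rng.normal(2 + i, 0.3, size=1000) for i in range(3)}

fig, ax = plt.subplots()
for name, samples in posteriors.items():
    # Density histograms with histtype='step' overlay cleanly on one axis.
    ax.hist(samples, bins=50, density=True, histtype='step', label=name)
ax.set_xlabel('alpha')
ax.set_title('Posterior of alpha across signals')
ax.legend()
fig.savefig('alpha_posteriors.png')
```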