
In Probabilistic-Programming-and-Bayesian-Methods-for-Hackers, a method is proposed for computing a p value for the hypothesis that two proportions are different.

(The Jupyter notebook containing the entire chapter is here: http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter2_MorePyMC/Ch2_MorePyMC_PyMC2.ipynb)

The code is as follows:

import numpy as np
import pymc3 as pm
from scipy.stats import bernoulli
from IPython.core.pylabtools import figsize

figsize(12, 4)

#these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04


N_A = 1700
N_B = 1700

#generate some observations
observations_A = bernoulli.rvs(true_p_A, size=N_A)
observations_B = bernoulli.rvs(true_p_B, size=N_B)

print(np.mean(observations_A))
print(np.mean(observations_B))

0.04058823529411765
0.03411764705882353

# Set up the pymc3 model. Again assume Uniform priors for p_A and p_B.
with pm.Model() as model:
    p_A = pm.Uniform("p_A", 0, 1)
    p_B = pm.Uniform("p_B", 0, 1)

    # Define the deterministic delta function. This is our unknown of interest.
    delta = pm.Deterministic("delta", p_A - p_B)


    # Set of observations, in this case we have two observation datasets.
    obs_A = pm.Bernoulli("obs_A", p_A, observed=observations_A)
    obs_B = pm.Bernoulli("obs_B", p_B, observed=observations_B)

    # To be explained in chapter 3.
    step = pm.Metropolis()
    trace = pm.sample(20000, step=step)
    burned_trace = trace[1000:]

p_A_samples = burned_trace["p_A"]
p_B_samples = burned_trace["p_B"]
delta_samples = burned_trace["delta"]

# Count the number of samples less than 0, i.e. the area under the curve
# before 0, which represents the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
    np.mean(delta_samples < 0))

print("Probability site A is BETTER than site B: %.3f" % \
    np.mean(delta_samples > 0))

Probability site A is WORSE than site B: 0.167
Probability site A is BETTER than site B: 0.833

However, if we compute the p value using statsmodels, we get a very different result:

from scipy.stats import norm, chi2_contingency
import statsmodels.api as sm


s1 = int(1700 * 0.04058823529411765)
n1 = 1700
s2 = int(1700 * 0.03411764705882353)
n2 = 1700
p1 = s1/n1
p2 = s2/n2
p = (s1 + s2)/(n1+n2)
z = (p2-p1)/ ((p*(1-p)*((1/n1)+(1/n2)))**0.5)
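# This is the pooled two-proportion z statistic; the corresponding two-sided
# p-value is 2 * (1 - norm.cdf(abs(z))), which is also what proportions_ztest
# computes below (it uses p1 - p2 in the numerator, so only the sign differs).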

z1, p_value1 = sm.stats.proportions_ztest([s1, s2], [n1, n2])

print('z1 is {0} and p is {1}'.format(z1, p))

z1 is 0.9948492584166934 and p is 0.03735294117647059

With MCMC, the p value seems to be 0.167, but using statsmodels we get a p value of 0.037.

How can I understand this?

user8270077

1 Answer


Looks like you printed the wrong value. Try this instead:

print('z1 is {0} and p is {1}'.format(z1, p_value1))

Also, if you want to test the hypothesis p_A > p_B, then you should set the alternative parameter in the function call to 'larger', like so:

z1, p_value1 = sm.stats.proportions_ztest([s1, s2], [n1, n2], alternative='larger')

The docs have more examples on how to use it.
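For reference, here is a minimal, self-contained sketch of both calls, reusing the counts from the question (69 and 58 successes out of 1700 trials each); the variable names are just for illustration:

import numpy as np
import statsmodels.api as sm

counts = np.array([69, 58])    # successes observed for sites A and B
nobs = np.array([1700, 1700])  # number of trials for each site

# Default: two-sided test of H1: p_A != p_B
z_two, p_two = sm.stats.proportions_ztest(counts, nobs)

# One-sided test of H1: p_A > p_B
z_one, p_one = sm.stats.proportions_ztest(counts, nobs, alternative='larger')

print('two-sided: z = {0:.3f}, p = {1:.3f}'.format(z_two, p_two))
print('larger:    z = {0:.3f}, p = {1:.3f}'.format(z_one, p_one))

Note that the order of the counts matters: with [69, 58], alternative='larger' tests whether the first proportion (site A) is larger than the second.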

Willian Fuks