The post Retrieve model estimates from statsmodels shows that you can get a p-value for beta from a linear regression model using model.pvalues. To be specific, using the same example as in that post, you could get it directly with sm.OLS(df['y'], x).fit().pvalues[1]. That p-value, however, is computed under the null hypothesis H0: beta = 0. Suppose instead I'd like to know whether the slope of a linear model is significantly different from 1. Is there a straightforward way to adjust H0 to, say, H0: beta = 1 and retrieve the p-value accordingly?
Edit: Example
Building on the post above and the comment from Josef, here's a potential setup:
Code:
import pandas as pd
import numpy as np
import statsmodels.api as sm
# A dataframe with two variables
np.random.seed(123)
rows = 12
rng = pd.date_range('1/1/2017', periods=rows, freq='D')
df = pd.DataFrame(np.random.randint(100,150,size=(rows, 2)), columns=['y', 'x'])
df = df.set_index(rng)
x = sm.add_constant(df['x'])
model = sm.OLS(df['y'], x).fit()
model.summary()
Output (abbreviated):
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 176.6364 20.546 8.597 0.000 130.858 222.415
x -0.3572 0.158 -2.261 0.047 -0.709 -0.005
==============================================================================
From dir(model) I've found 'wald_test' and 'wald_test_terms'. How could you use either of these to test the hypothesis that beta = 1, i.e. H0: beta = 1 vs. HA: beta != 1?
Thank you for any suggestions.