
I would like to do a simple joint Wald test on my fixed-effects regression coefficients, but with the restrictions set to something other than zero. More specifically, I would like to test H0: ai = 0 for every i and b = 1, i.e. whether the intercepts extracted from the fixed-effects model (ai) are equal to zero for each i (I know there is no overall intercept in a fixed-effects model, but you can still extract the individual effects with the fixef() command, and they should be close to zero if the fixed-effects model is the correct one) and whether my coefficients (b) are equal to 1.

Here is what I have:

library(plm)


form <- R_excess ~ I(beta_MKT_RF*MKT_RF) + I(beta_HML*HML) + I(beta_SMB*SMB)
reg1 <- plm(form, data=nlspd, model="within")

summary(reg1, vcov = function(x) vcovSCC(x, type="HC3", maxlag=12))

And here is the output; as you can see, my coefficients are all close to 1:

Call:
plm(formula = form, data = nlspd, model = "within")

Balanced Panel: n = 10, T = 624, N = 6240

Residuals:
       Min.     1st Qu.      Median     3rd Qu.        Max. 
-7.8706e-02 -9.0319e-03  3.8278e-05  8.9624e-03  1.1349e-01 

Coefficients:
                         Estimate Std. Error t-value  Pr(>|t|)    
I(beta_MKT_RF * MKT_RF) 1.0023818  0.0072942 137.422 < 2.2e-16 ***
I(beta_HML * HML)       0.9985867  0.0527123  18.944 < 2.2e-16 ***
I(beta_SMB * SMB)       0.9731437  0.0355880  27.345 < 2.2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Total Sum of Squares:    18.067
Residual Sum of Squares: 1.5037
R-Squared:      0.91677
Adj. R-Squared: 0.91661
F-statistic: 7808.71 on 3 and 623 DF, p-value: < 2.22e-16

I can also get the fixed-effect intercepts ai by using:

summary(fixef(reg1), vcov = function(x) vcovSCC(x, type="HC3", maxlag=12))
      Estimate  Std. Error t-value  Pr(>|t|)    
1   0.00127680  0.00062245  2.0512  0.040285 *  
2   0.00136923  0.00062251  2.1995  0.027877 *  
3   0.00104805  0.00062246  1.6837  0.092283 .  
4   0.00132979  0.00062259  2.1359  0.032727 *  
5  -0.00061048  0.00062252 -0.9807  0.326795    
6   0.00085262  0.00062247  1.3697  0.170816    
7  -0.00104724  0.00062250 -1.6823  0.092557 .  
8  -0.00089731  0.00062275 -1.4409  0.149672    
9  -0.00174805  0.00062292 -2.8062  0.005028 ** 
10 -0.00271173  0.00062343 -4.3497 1.385e-05 ***

Now I want to do the joint Wald test on these coefficients, i.e. test H0: ai = 0 for every i and b = 1.

Edit: This is different from the F-test on fixed effects, since I'm testing against a non-zero hypothesis.

Erwin Rhine
  • Possible duplicate of [F-test on Fixed Effects in R (Panel Data)](https://stackoverflow.com/questions/6171138/f-test-on-fixed-effects-in-r-panel-data) – Helix123 Apr 14 '19 at 19:12
  • Unfortunately not, I'm looking to test a specific hypothesis instead of the usual F-test, which assumes the coefficients are zero under the null. – Erwin Rhine Apr 14 '19 at 21:19
  • @ErwinRhine, does my answer solve your problem? – Julius Vainora Apr 17 '19 at 12:09

1 Answer


The question mentioned in the comments by @Helix123 doesn't do exactly what you need (nor is it about testing that all the coefficients are zero), but it is related. In particular, if you only wanted to test that the fixed effects are equal to zero, you could find answers there.

In your case, however, in addition to the hypothesis about the fixed effects, we also test whether all the other coefficients take particular nonzero values. Here's why that creates problems.

If you wanted to test that, say, I(beta_HML * HML) has a zero coefficient, then the restricted model provided to pFtest (see the accepted answer in the linked question) would be reg2, as in

form <- R_excess ~ -1 + I(beta_MKT_RF * MKT_RF) + I(beta_SMB * SMB)
reg2 <- plm(form, data = nlspd, model = "pooling") # Note "pooling", which sets fixed effects to zero
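
For reference, the comparison described above would then be a call like the following (just a sketch, assuming the pFtest approach from the linked question carries over to this restricted pooling model):

pFtest(reg1, reg2) # joint F test that the dropped coefficient and all fixed effects are zero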

If you wanted to test that the coefficient of this variable is 1, then you could use reg3 in

form <- R_excess - I(beta_HML * HML) ~ -1 + I(beta_MKT_RF * MKT_RF) + I(beta_SMB * SMB)
reg3 <- plm(form, data = nlspd, model = "pooling") # Note "pooling", which sets fixed effects to zero

Since your hypothesis is about all three remaining coefficients, we actually wouldn't have anything left to estimate on the right-hand side. It happens that plm doesn't like that and throws an "empty model" error.

If we were using lm, another option would be to use, say, offset(beta_MKT_RF * MKT_RF) in the formula, which would fix that coefficient at 1 so that it is not estimated. However, plm doesn't allow offset().

That said, it seems that the easier option is to use lm, just as suggested in the linked question. In particular,

data("Produc", package = "plm")
mU <- lm(log(gsp) ~ -1 + log(emp) + factor(state), data = Produc)
mR <- lm(log(gsp) ~ -1 + offset(log(emp)), data = Produc)
library(lmtest)
lrtest(mR, mU)
# Likelihood ratio test
#
# Model 1: log(gsp) ~ -1 + offset(log(emp))
# Model 2: log(gsp) ~ -1 + log(emp) + factor(state)
#   #Df  LogLik Df  Chisq Pr(>Chisq)    
# 1   1 -2187.9                         
# 2  50  1467.3 49 7310.4  < 2.2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

so that mU contains fixed effects and estimates the effect of log(emp) without restriction, while mR contains no fixed effects and fixes the effect of log(emp) at 1.

You didn't provide your data, but it should be close to

mU <- lm(R_excess ~ -1 + I(beta_MKT_RF * MKT_RF) + I(beta_HML * HML) +
           I(beta_SMB * SMB) + factor(var), data = nlspd)
mR <- lm(R_excess ~ -1 + offset(beta_MKT_RF * MKT_RF) + offset(beta_HML * HML) +
           offset(beta_SMB * SMB), data = nlspd)
lrtest(mR, mU)

where var is the cross-sectional dimension variable.
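
Since the question explicitly asks for a Wald test, the same restrictions could also be imposed directly on mU with car::linearHypothesis, which accepts a restriction matrix and a non-zero right-hand side. This is only a sketch under the same assumptions about nlspd and var, not part of the likelihood-ratio approach above:

library(car)

cf  <- coef(mU)
K   <- diag(length(cf))                                    # one restriction per coefficient
rhs <- ifelse(grepl("^factor\\(var\\)", names(cf)), 0, 1)  # 0 for each fixed-effect dummy, 1 for each slope

linearHypothesis(mU, K, rhs)  # joint Wald (F) test of H0: a_i = 0 and b = 1
# A robust version could pass, e.g., vcov. = sandwich::vcovHC(mU, type = "HC3")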

Julius Vainora