
I have two hypotheses (A and B):

H0_A: b1 <= 0; H1_A: b1 > 0
H0_B: b2 >= 0; H1_B: b2 < 0

To estimate the coefficients b1 and b2, I ran the regression `lm(y ~ x1 + x2)`.

My question: how can I get the p-value for each coefficient (b1, b2), according to its hypothesis setting, to see if I can reject the null hypothesis?

When I use the summary() function on the regression, p-values are reported, but I believe they only consider the two-sided case that each beta is unequal to zero.

Thank you very much!!

StupidWolf
StableSong
    this is not terribly hard to do by extracting the t-statistic from the summary output and using the `pt()` function to get the appropriate tail values, but it would be easier to answer if you could give a [mcve] ... in fact, searching for "R one-tailed regression test" finds an answer [here](https://stats.stackexchange.com/questions/325354/if-and-how-to-use-one-tailed-testing-in-multiple-regression) ... – Ben Bolker May 20 '20 at 17:17
    Also - assuming that the estimated coefficient is in the direction of the alternative hypothesis then the p-value will just be half that of what is displayed. – Dason May 20 '20 at 17:18

1 Answer


The lm() function defaults to a two-sided hypothesis test for each coefficient. As a cautionary note, you should stick with a two-sided alternative unless you have a strong a priori theoretical basis for being interested in only one side. Reproducible examples help the community serve you better. I've included some code below to help extract your p-values. Adjust the distribution function as needed.

# Extracting your p-values (two-sided alternative)

mod <- lm(y ~ x1 + x2, data = ...)
summary(mod)$coefficients[ , "Pr(>|t|)"]

# Adjusting your rejection regions

output <- summary( lm(y ~ x1 + x2, data = ...) )

t <- coef(output)[ , 3]       # extracting the t-statistics
df <- output$df[2]            # extracting the residual degrees of freedom
pt(t, df, lower.tail = ...)   # lower.tail = TRUE for H1: b < 0; FALSE for H1: b > 0
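
To see how this fits together, here is a reproducible sketch using simulated data (the data-generating process, sample size, and true coefficient values are illustrative assumptions, not from the original post):

```r
set.seed(123)

# Simulated data: b1 is truly positive, b2 is truly negative (assumed for illustration)
n  <- 200
x1 <- rnorm(n)
x2 <- rnorm(n)
y  <- 1 + 0.5 * x1 - 0.7 * x2 + rnorm(n)

output <- summary(lm(y ~ x1 + x2))

t_vals <- coef(output)[, 3]   # t-statistics for (Intercept), x1, x2
rdf    <- output$df[2]        # residual degrees of freedom

# One-sided p-values matching the stated alternatives:
p_b1 <- pt(t_vals["x1"], rdf, lower.tail = FALSE)  # H1_A: b1 > 0 (upper tail)
p_b2 <- pt(t_vals["x2"], rdf, lower.tail = TRUE)   # H1_B: b2 < 0 (lower tail)

p_b1
p_b2
```

When the estimate falls on the side of the alternative, each one-sided p-value is half the two-sided "Pr(>|t|)" that summary() reports, as Dason notes in the comments.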
Thomas Bilach