
By default, `summary` on an `lm` fit tests whether each coefficient equals zero. My question is very basic: how can I test whether a slope coefficient equals some non-zero value? One approach could be to use `confint`, but that does not provide a p-value. I would also like to know how to do a one-sided test with `lm`.

ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)
lm.D9 <- lm(weight ~ group)
summary(lm.D9)

Call:
lm(formula = weight ~ group)

Residuals:
    Min      1Q  Median      3Q     Max 
-1.0710 -0.4938  0.0685  0.2462  1.3690 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   5.0320     0.2202  22.850 9.55e-15 ***
groupTrt     -0.3710     0.3114  -1.191    0.249    
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 

Residual standard error: 0.6964 on 18 degrees of freedom
Multiple R-squared: 0.07308,    Adjusted R-squared: 0.02158 
F-statistic: 1.419 on 1 and 18 DF,  p-value: 0.249 


confint(lm.D9)
              2.5 %    97.5 %
(Intercept)  4.56934 5.4946602
groupTrt    -1.02530 0.2833003

Thanks for your time and effort.

MYaseen208

6 Answers


As @power says, you can do it by hand. Here is an example:

> est <- summary(lm.D9)$coef[2, 1]  # estimate of the groupTrt coefficient
> se  <- summary(lm.D9)$coef[2, 2]  # its standard error
> df  <- summary(lm.D9)$df[2]       # residual degrees of freedom (18)
> 
> # two-sided p-value for H0: coefficient = m
> m <- 0
> 2 * pt(-abs((est - m) / se), df)
[1] 0.2490232
> 
> m <- 0.2
> 2 * pt(-abs((est - m) / se), df)
[1] 0.08332659

You can do a one-sided test by omitting the `2 *` and using the appropriate tail; see the update below.

UPDATES

Here is an example of the two-sided and one-sided probabilities:

> m <- 0.2
> 
> # two-sided probability
> 2 * pt(-abs((est - m) / se), df)
[1] 0.08332659
> 
> # one-sided, upper tail (H1: coefficient greater than 0.2)
> pt((est - m) / se, df, lower.tail = FALSE)
[1] 0.9583367
> 
> # one-sided, lower tail (H1: coefficient less than 0.2)
> pt((est - m) / se, df, lower.tail = TRUE)
[1] 0.0416633

Note that the upper and lower probabilities sum to exactly 1.
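
If you do this often, the calculation can be wrapped in a small helper. The following is only a sketch, and `coef_test` (and its arguments) is my own naming rather than anything from a package:

# Test H0: the lm coefficient named/indexed by `term` equals m.
coef_test <- function(fit, term = 2, m = 0,
                      alternative = c("two.sided", "greater", "less")) {
  alternative <- match.arg(alternative)
  est <- coef(summary(fit))[term, "Estimate"]
  se  <- coef(summary(fit))[term, "Std. Error"]
  df  <- fit$df.residual
  tstat <- (est - m) / se
  p <- switch(alternative,
              two.sided = 2 * pt(-abs(tstat), df),
              greater   = pt(tstat, df, lower.tail = FALSE),
              less      = pt(tstat, df, lower.tail = TRUE))
  c(estimate = est, t = tstat, df = df, p.value = p)
}

coef_test(lm.D9, "groupTrt", m = 0.2)                        # two-sided, ~0.083
coef_test(lm.D9, "groupTrt", m = 0.2, alternative = "less")  # lower tail, ~0.042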

kohske
  • I don't think "omitting the 2" is statistically correct. I think you need to substitute 1.644854 = qnorm(.95), and then only look in the direction specified by the as-yet unstated hypothesis. – IRTFM Nov 11 '11 at 05:22
  • That looks better. Maybe I didn't understand what you were doing before. I am more comfortable looking at mean + 1.64*se (or mean - 1.64*se depending on the specific hypothesis) versus 0. It seems to me that most people who ask this question have a marginal result that they are just trying to push across the artificial "finish line" of 0.05. – IRTFM Nov 11 '11 at 05:55
  • Or use `t.test` on the data directly; see my answer below. – James Nov 11 '11 at 11:29

Use the `linearHypothesis` function from the car package. For instance, you can check whether the coefficient of groupTrt equals -1 using:

linearHypothesis(lm.D9, "groupTrt = -1")

Linear hypothesis test

Hypothesis:
groupTrt = - 1

Model 1: restricted model
Model 2: weight ~ group

  Res.Df     RSS Df Sum of Sq      F  Pr(>F)  
1     19 10.7075                              
2     18  8.7292  1    1.9782 4.0791 0.05856 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
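
If you want the p-value programmatically, the result behaves like an anova table; a sketch, reading off the Pr(>F) column shown above:

library(car)
lh <- linearHypothesis(lm.D9, "groupTrt = -1")
lh[2, "Pr(>F)"]   # ~0.059, the p-value for H0: groupTrt = -1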
Ramnath
  • Thanks a lot for your nice answer. I wonder how to do one-sided test. I tried this with `groupTrt >= -1` but it did not work. – MYaseen208 Nov 11 '11 at 05:06
  • How about halving the *p* value ... ?? (That would be the standard answer for "how do I get a one-sided *p* value", although you should check the comments below @kohske's answer) – Ben Bolker Nov 11 '11 at 23:37

The smatr package has a `slope.test()` function that tests a slope against a specified value, and it can use OLS.
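
A rough sketch of the call (the argument names, and whether method = "OLS" is accepted, are from memory and should be checked against ?slope.test):

library(smatr)
x <- as.numeric(group == "Trt")                           # numeric coding of the factor
slope.test(weight, x, test.value = 0.2, method = "OLS")   # H0: slope = 0.2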

kmm

In addition to all the other good answers, you could use an offset. It's a little trickier with categorical predictors, because you need to know the coding.

lm(weight~group+offset(1*(group=="Trt")))

The `1*` here is unnecessary but is included to emphasize that you are testing against the hypothesis that the difference is 1 (if you want to test against a hypothesized difference of d, use `d*(group=="Trt")`).
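
For instance, to test the same hypothesis as the linearHypothesis example above (a difference of -1), something like this sketch should work; the groupTrt row of the summary then carries the test:

# The groupTrt t-test now tests H0: difference = -1;
# its p-value should match the linearHypothesis result above (~0.059).
summary(lm(weight ~ group + offset(-1 * (group == "Trt"))))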

Ben Bolker

You can use `t.test` to do this for your data. The `mu` parameter sets the hypothesized difference of the group means; note that `t.test` takes this difference as the first factor level minus the second (Ctl minus Trt here), which has the opposite sign to the `groupTrt` coefficient from `lm`. The `alternative` parameter lets you choose between one- and two-sided tests.

t.test(weight~group,var.equal=TRUE)

        Two Sample t-test

data:  weight by group 
t = 1.1913, df = 18, p-value = 0.249
alternative hypothesis: true difference in means is not equal to 0 
95 percent confidence interval:
 -0.2833003  1.0253003 
sample estimates:
mean in group Ctl mean in group Trt 
            5.032             4.661 



t.test(weight~group,var.equal=TRUE,mu=-1)

        Two Sample t-test

data:  weight by group 
t = 4.4022, df = 18, p-value = 0.0003438
alternative hypothesis: true difference in means is not equal to -1 
95 percent confidence interval:
 -0.2833003  1.0253003 
sample estimates:
mean in group Ctl mean in group Trt 
            5.032             4.661
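
For a one-sided test, set the alternative argument; for example (a sketch of the call only):

# H1: the Ctl-minus-Trt difference is greater than -1
t.test(weight ~ group, var.equal = TRUE, mu = -1, alternative = "greater")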
James
  • Good alternative. Note that the t test is only available when there is a single categorical predictor with two levels. – kohske Nov 11 '11 at 13:00

Code up your own test. You know the estimated coefficient and you know the standard error, so you can construct your own test statistic.

power
  • While your comment makes complete sense, remember that this is R, so the chances that a trivial test is already implemented are extremely high, and it is worth checking before spending time coding it up. Of course, it has its own instructional value. – Ramnath Nov 11 '11 at 05:02
  • This isn't necessarily a bad answer, if you added some example code to illustrate how one might do this. – joran Nov 11 '11 at 05:05
  • This can be trivial to code up. Check a first year econometrics textbook. – power Nov 11 '11 at 05:24