
What is the most practical way of extracting the global p-value of a linear model, `lm`? I usually end up taking the results from `summary` and plugging the F-test statistic and its degrees of freedom into `pf`:

set.seed(1)
n <- 10
x <- 1:10
y <- 2*x+rnorm(n)
fit <- lm(y ~ x)
summary(fit) # global p-value: 1.324e-08
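# note: $fstat works via partial matching of the summary element $fstatistic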
fstat <- summary(fit)$fstat
pval <- pf(fstat[1], fstat[2], fstat[3], lower.tail = FALSE)
pval
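
For reuse, the same calculation can be wrapped in a small helper (just a sketch; `lm_pvalue` is an illustrative name, not an existing function):

lm_pvalue <- function(fit) {
  fs <- summary(fit)$fstatistic
  unname(pf(fs[1L], fs[2L], fs[3L], lower.tail = FALSE))
}
lm_pvalue(fit)
# [1] 1.324022e-08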
Marc in the box
  • possible duplicate of [pull out p-values and r-squared from a linear regression](http://stackoverflow.com/questions/5587676/pull-out-p-values-and-r-squared-from-a-linear-regression) – zx8754 Apr 17 '15 at 13:54
  • Since this is basically what `stats:::print.summary.lm` does, I believe that this is the way. – Roland Apr 17 '15 at 14:00
  • 2
    You question title asks about efficiency: You can achieve better performance by not using `summary` or `lm` and calculating the F-stats yourself. – Roland Apr 17 '15 at 14:02
  • 1
    I actually meant "practical" or direct. I guess, given the responses here, one must calculate via `pf`. There seems to be no direct way of extracting from the model object or via `summary`. – Marc in the box Apr 17 '15 at 14:21
  • 1
    @Roland - if you want to provide an answer that avoids `lm` and `summary` I (and I'm sure others) would be interested to see it. – Marc in the box Apr 17 '15 at 15:52
  • This is not a duplicate of [that](http://stackoverflow.com/questions/5587676/pull-out-p-values-and-r-squared-from-a-linear-regression) "how to" question. It rather asks for a more direct way to extract the value from the fitted `lm` model object. – Marc in the box Apr 18 '15 at 06:26

3 Answers


Check out the `broom` package:

library(broom)

set.seed(1)
n <- 10
x <- 1:10
y <- 2*x+rnorm(n)
fit <- lm(y ~ x)

glance(fit)
#   r.squared adj.r.squared     sigma statistic      p.value df    logLik      AIC      BIC deviance df.residual
# 1 0.9851881     0.9833366 0.8090653  532.1048 1.324022e-08  2 -10.95491 27.90982 28.81758 5.236693           8

glance(fit)$p.value
# [1] 1.324022e-08

tidy(fit)
#          term   estimate  std.error  statistic      p.value
# 1 (Intercept) -0.1688236 0.55269681 -0.3054542 7.678170e-01
# 2           x  2.0547321 0.08907516 23.0673979 1.324022e-08
JasonAizkalns

Since you asked for it:

Here is a bare-bones implementation that omits the bells and whistles (and checks) of `lm`. As a consequence it is faster, but you use it at your own risk, i.e., the warnings in `help("lm.fit")` apply. Out of laziness, the code for calculating the F-statistic was extracted from the `summary.lm` source and only slightly amended (so please consider `licence()` and `citation("stats")`).

fit1 <- lm.fit(cbind(1, x), y)

fstats <- function(obj) {
  p <- obj$rank
  rdf <- obj$df.residual
  r <- obj$residuals
  f <- obj$fitted.values
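  # model and residual sums of squares, then the residual variance (mirrors summary.lm)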
  mss <-  sum((f - mean(f))^2)
  rss <- sum(r^2)
  resvar <- rss/rdf
  df.int <- 1L #assumes there is always an intercept
  fstatistic <- c(value = (mss/(p - df.int))/resvar, 
                      numdf = p - df.int, dendf = rdf)
  fstatistic["pval"] <- pf(fstatistic[1L], 
                           fstatistic[2L], 
                           fstatistic[3L], lower.tail = FALSE)
  fstatistic
}

fstats(fit1)
#       value        numdf        dendf         pval 
#5.321048e+02 1.000000e+00 8.000000e+00 1.324022e-08 
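
To check the claimed speed-up on your own data, a rough base-R timing comparison (only a sketch; the numbers will vary by machine and model size):

X <- cbind(1, x)
system.time(replicate(1e4, fstats(lm.fit(X, y))["pval"]))
system.time(replicate(1e4, {
  fs <- summary(lm(y ~ x))$fstatistic
  pf(fs[1L], fs[2L], fs[3L], lower.tail = FALSE)
}))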
Roland

Check the source of `print.summary.lm`; it uses the `pf` function to get the p-value:

format.pval(pf(x$fstatistic[1L],
               x$fstatistic[2L], x$fstatistic[3L], lower.tail = FALSE),
            digits = digits)
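
To view that source yourself (the snippet above is part of a larger `cat()` call in the print method), you can pull the method from the stats namespace:

getS3method("print", "summary.lm")
# or, equivalently:
stats:::print.summary.lm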
Kyle Balkissoon