
I am able to change the coefficients of my linear model. I then want to compare the results of my "new" model (with the new coefficients) to the original, but R is not recalculating the results with the new coefficients.

As you can see in the following example, the summaries of my models fit and fit1 are exactly the same, even though results like the multiple R-squared or the fitted values should change.

set.seed(2157010)
x1 <- 1998:2011
x2 <- x1 + rnorm(length(x1))
y <- 3 * x2 + rnorm(length(x1))
fit <- lm(y ~ x1 + x2)

# view original coefficients
coef(fit)

# generate second function for comparing results
fit1 <- fit

# replace coefficients with new values (assignment needs the full name, 'coefficients')
fit1$coefficients[2:3] <- c(5, 1)

# view the new coefficients
coef(fit1)

# Comparing
summary(fit)
summary(fit1)

Thanks in advance

ToniNA
  • Welcome to Stack Overflow! Does this answer help: https://stackoverflow.com/questions/12323859/how-to-manually-set-coefficients-for-variables-in-linear-model – Harrison Jones Oct 14 '22 at 14:49
  • Changing `fit1$coefficients` does not alter anything else in the `fit1` object, so `summary` returns the same results (other than the new coefficient values). If you run `predict(fit1)`, the altered coefficients will produce new results (see the sketch below these comments). I'm unclear on why you would ever want to do this, however. – jdobres Oct 14 '22 at 14:49
  • I am doing this because I am calculating hedge ratios, and R-squared shows the efficiency of my hedge. So I can compare the results with different hedge ratios. Thanks for your answer, I will try it. – ToniNA Oct 14 '22 at 14:55
  • R^2 can be computed as `cor(y, predict(fit1))^2`, but @jdobres is right: even after your last comment, this doesn't make any sense. Given a response and the same regressors, you will always have a smaller R^2 with any other coefficients, since OLS minimizes the mean squared error and R^2 = 1 - rss/tss. VTC. – Rui Barradas Oct 14 '22 at 15:02
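To illustrate jdobres's point, here is a minimal sketch (continuing from the code in the question): predict() recomputes predictions from the stored coefficients, while summary() reuses the residuals and fitted values saved when the model was fit.

# predict() recomputes X %*% beta from the (altered) coefficients ...
predict(fit)
predict(fit1)   # differs from predict(fit): uses the new coefficients 5 and 1

# ... but summary() is based on the residuals and fitted values stored in
# the object when it was fit, which were not changed:
summary(fit)$r.squared == summary(fit1)$r.squared   # TRUE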

1 Answer


It might be easier to compute the multiple R^2 yourself with the substituted parameters.

mult_r2 <- function(beta, y, X) {
   tot_ss <- var(y) * (length(y) - 1)  # total sum of squares: sum((y - mean(y))^2)
   rss <- sum((y - X %*% beta)^2)      # residual sum of squares for the given beta
   1 - rss/tot_ss
}

(Or, more compactly, following the comments, you could compute p <- X %*% beta; cor(y, p)^2. One caveat: a squared correlation always lies in [0, 1], so it agrees with 1 - RSS/TSS only for the least-squares fit itself and can never reproduce a negative value.)
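For example, a small sketch of that correlation-based version, reusing fit and y from the question:

X <- model.matrix(fit)
p <- drop(X %*% coef(fit))   # predicted values for the least-squares fit
cor(y, p)^2                  # agrees with mult_r2() here, ~0.9931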

mult_r2(coef(fit), y = model.response(model.frame(fit)), X = model.matrix(fit))
## 0.9931179, matches summary()

Now with new coefficients:

new_coef <- coef(fit)
new_coef[2:3] <- c(5,1)
mult_r2(new_coef, y = model.response(model.frame(fit)), X = model.matrix(fit))
## [1] -343917

That last result seems pretty wild, but the substituted coefficients are very different from the true least-squares coefficients, and a negative R^2 is possible when the model is bad enough ...
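As a sanity check of the formula (a small sketch using mult_r2() from above): the "intercept-only" coefficient vector, which just predicts mean(y) for every observation, gives RSS = TSS and hence an R^2 of essentially zero; any coefficients that predict worse than the mean go negative.

mean_coef <- c(mean(y), 0, 0)  # predict mean(y) everywhere; slopes set to 0
mult_r2(mean_coef, y = y, X = model.matrix(fit))
## ~0 (up to floating-point rounding): the baseline that negative R^2 falls below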

Ben Bolker