
I am working with a regression, say:

df <- data.frame(x1=c(1, 2, 4, 5, 5, 6, 6, 7, 8, 10, 11, 11, 12, 12, 14),
                x2=c(1, 2, 3, 6, 15, 10, 11, NA, 29, 12, 11, 3, 34, 27, 4),
                y=c(64, 66, 76, 73, 74, 81, 83, 82, 80, 88, 84, 82, 91, 93, 89))

attach(df)
model <- lm(y~x1+x2)

Which looks like:

> summary(model)

Call:
lm(formula = y ~ x1 + x2)

Residuals:
   Min     1Q Median     3Q    Max 
-4.062 -2.478 -1.309  2.787  5.649 

Coefficients:
            Estimate Std. Error t value          Pr(>|t|)    
(Intercept) 64.77697    1.96609  32.947 0.000000000000387 ***
x1           1.77351    0.25602   6.927 0.000015886638034 ***
x2           0.17577    0.09692   1.814            0.0948 .  
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 3.357 on 12 degrees of freedom
Multiple R-squared:  0.8673,    Adjusted R-squared:  0.8452 
F-statistic: 39.23 on 2 and 12 DF,  p-value: 0.000005451

I then want to use predict(model) to compute predicted values for new data. However, in some cases it is meaningful to force some coefficients to zero. For example, suppose I want to produce a modified model, say model0, such that:

model0$coefficients
(Intercept)          x1          x2 
 64.7769674           0   0.1757696 

My question is whether there is a recommended way to do this. I understand this may be equivalent to asking how best to modify an element of a sublist, but I would like to know your take.
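For concreteness, a naive sketch of what I have in mind (not necessarily the recommended way) is simply copying the fitted object and overwriting the named entry of its coefficients vector:

```r
# Naive sketch: copy the fitted model and zero the x1 coefficient.
# predict.lm() multiplies the model matrix for newdata by the stored
# coefficients, so predictions from model0 then ignore x1 entirely.
model0 <- model
model0$coefficients["x1"] <- 0

# Predicted values for new data; x1 now contributes nothing:
predict(model0, newdata = data.frame(x1 = 5, x2 = 10))
```

One caveat I am aware of: the rest of the object (standard errors, residuals, R-squared, etc.) still reflects the original fit, so only the point predictions are meaningful after this surgery. Hence my question about whether there is a cleaner, recommended approach.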
