I'm trying to reproduce in Python a linear regression I have already done in R, in order to identify variables with 0 coefficients. The issue I'm running into is that R's lm() returns NA coefficients for columns it treats as redundant (in my data, columns with very low variance), while the scikit-learn regression returns a numeric coefficient for every column. In the R code I capture these variables by saving the names of the coefficients that come back as NA, but I can't figure out a way to mimic this behavior in Python. The code I'm using is below.
R Code:
a <- c(23, 45, 546, 42, 68, 15, 47)
b <- c(1, 2, 4, 6, 34, 2, 8)
c <- c(22, 33, 44, 55, 66, 77, 88)
d <- c(1, 1, 1, 1, 1, 1, 1)
e <- c(1, 1, 1, 1, 1, 1, 1.1)
f <- c(1, 1, 1, 1, 1, 1, 1.01)
g <- c(1, 1, 1, 1, 1, 1, 1.001)
df <- data.frame(a, b, c, d, e, f, g)
var_list <- c('b', 'c', 'd', 'e', 'f', 'g')
target <- df$a
reg_data <- cbind(target, df[, var_list])
if (nrow(reg_data) < length(var_list)){
message(paste0(' WARNING: Data set is rank deficient. Result may be doubtful'))
}
reg_model <- lm(target ~ ., data = reg_data)
print(reg_model$coefficients)
# store the independent variables whose coefficients come back as NA (treated as 0)
zero_coef_IndepVars.v <- names(which(is.na(reg_model$coefficients)))
print(zero_coef_IndepVars.v)
Python Code:
import pandas as pd
from sklearn import linear_model
a = [23, 45, 546, 42, 68, 15, 47]
b = [1, 2, 4, 6, 34, 2, 8]
c = [22, 33, 44, 55, 66, 77, 88]
d = [1, 1, 1, 1, 1, 1, 1]
e = [1, 1, 1, 1, 1, 1, 1.1]
f = [1, 1, 1, 1, 1, 1, 1.01]
g = [1, 1, 1, 1, 1, 1, 1.001]
df = pd.DataFrame({'a': a,
                   'b': b,
                   'c': c,
                   'd': d,
                   'e': e,
                   'f': f,
                   'g': g})
var_list = ['b', 'c', 'd', 'e', 'f', 'g']
# build linear regression model and test for linear combination
target = df['a']
reg_data = pd.DataFrame()
reg_data['a'] = target
train_cols = df[var_list]
if reg_data.shape[0] < len(var_list):
    print(' WARNING: Data set is rank deficient. Result may be doubtful')
# Create linear regression object
reg_model = linear_model.LinearRegression()
# Train the model using the training sets
reg_model.fit(train_cols, reg_data['a'])
print(reg_model.coef_)
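As a side note, here is a quick sanity check I've been using while debugging (not part of the real pipeline) just to confirm that the design matrix, with an intercept column like both lm() and LinearRegression add, really is rank deficient:

import numpy as np

# design matrix with an explicit intercept column
X = np.column_stack([np.ones(len(df)), train_cols.values])

# numerical rank vs. number of columns (rank 4 here by my count, versus 7 columns)
print(np.linalg.matrix_rank(X), X.shape[1])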
Output from R:
(Intercept)           b           c           d           e           f           g
 537.555988   -0.669253   -1.054719          NA -356.715149          NA          NA
> print(zero_coef_IndepVars.v)
[1] "d" "f" "g"
Output from Python:
b c d e f g
[-0.66925301 -1.05471932 0. -353.1483504 -35.31483504 -3.5314835]
As you can see, the coefficients for columns 'b', 'c', and 'e' are close between the two, but very different for 'd', 'f', and 'g'. For this example regression I would want to return ['d', 'f', 'g'], since those come back as NA from R. The issue is that the sklearn regression happens to return 0 for column 'd', but returns -35.31 for column 'f' and -3.53 for column 'g'.
Does anyone know how R decides whether to return NA or a value, and how to implement this behavior in the Python version? Knowing where the differences stem from will likely help me implement the R behavior in Python. I need the results of the Python script to match the R output exactly.
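For what it's worth, my current guess is that lm() drops redundant columns during a pivoted QR decomposition of the design matrix (with a small tolerance, which I believe defaults to 1e-7) and reports NA for the columns it pivots out, while sklearn's LinearRegression uses an ordinary least-squares solver that returns a minimum-norm solution and so spreads the weight across the collinear columns 'e', 'f', and 'g'. If that's right, something like the sketch below, using scipy.linalg.qr with pivoting, might recover the NA column names. The tolerance value and the overall approach are my assumptions rather than something I've verified against lm's internals, and I've read that lm only moves near-degenerate columns to the end instead of fully reordering, so I'm not confident this matches lm in every case:

import numpy as np
from scipy.linalg import qr

# design matrix with an explicit intercept column, like lm builds internally
X = np.column_stack([np.ones(len(df)), df[var_list].values])
col_names = ['(Intercept)'] + var_list

# pivoted QR reorders columns so the most linearly independent ones come first
Q, R, piv = qr(X, pivoting=True)

# guess at the rule: a pivoted column is kept while the corresponding diagonal
# entry of R is non-negligible relative to the largest one (tol is my assumption)
tol = 1e-7
diag = np.abs(np.diag(R))
rank = int(np.sum(diag > tol * diag[0]))

# columns pivoted past the numerical rank are the ones R would report as NA
na_cols = [col_names[i] for i in piv[rank:]]
print(na_cols)  # hoping this matches R's ['d', 'f', 'g'] for this example

Is this roughly what lm does internally, or is there a more faithful way to reproduce it in Python?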