
My overall goal is to determine variable importance from a SuperLearner fit on the Boston dataset. However, when I attempt to compute variable importance using the vip package in R, I receive the error below. My suspicion is that the prediction wrapper containing the SuperLearner object is the cause of the error, but I am by no means sure.

# Call:  
# SuperLearner(Y = y_train, X = x_train, family = binomial(),
#     SL.library = c("SL.mean", "SL.glmnet", "SL.ranger"), method = "method.AUC")
#
#                    Risk      Coef
# SL.mean_All   0.55622189 0.3333333
# SL.glmnet_All 0.06240630 0.3333333
# SL.ranger_All 0.02745502 0.3333333

# Error in mean(actual == predicted, na.rm = FALSE):
#   (list) object cannot be coerced to type 'double'
# Traceback:
# 1. vi_permute(object = sl, method = "permute", feature_names = colnames,
#        train = x_train, target = y_holdout, metric = "accuracy",
#        type = "difference", nsim = 1, pred_wrapper = pred_wrapper)
# 2. vi_permute.default(object = sl, method = "permute", feature_names = colnames,
#        train = x_train, target = y_holdout, metric = "accuracy",
#        type = "difference", nsim = 1, pred_wrapper = pred_wrapper)
# 3. mfun(actual = train_y, predicted = pred_wrapper(object, newdata = train_x))
# 4. mean(actual == predicted, na.rm = FALSE)

I have performed the following:

library(MASS)
data(Boston, package = "MASS")

# Extract our outcome variable from the dataframe.
outcome = Boston$medv

# Create a dataframe to contain our explanatory variables.
data = subset(Boston, select = -medv)

set.seed(1)
# Reduce to a dataset of 150 observations to speed up model fitting.
train_obs = sample(nrow(data), 150)

# X is our training sample.
x_train = data[train_obs, ]

# Create a holdout set for evaluating model performance.
x_holdout = data[-train_obs, ]

# Create a binary outcome variable: towns in which median home value is > $22,000
# (medv is recorded in $1000s).
outcome_bin = as.numeric(outcome > 22)

y_train = outcome_bin[train_obs]
y_holdout = outcome_bin[-train_obs]

library(SuperLearner)
set.seed(1)
sl = SuperLearner(Y = y_train, X = x_train, family = binomial(),
  SL.library = c("SL.mean", "SL.glmnet", "SL.ranger"), method = "method.AUC")
sl

colnames <- colnames(x_train)

library(magrittr)  # provides %>% used in the wrapper below
pred_wrapper <- function(sl, newdata) {
  predict(sl, x = as.matrix(y_holdout)) %>%
    as.vector()
}

# Plot VI scores
library(vip)
p1 <- vi_permute(object = sl, method = "permute", feature_names = colnames, train = x_train, 
          target = y_holdout,
          metric = "accuracy",
          type = "difference", 
          nsim = 1,
          pred_wrapper = pred_wrapper) 

1 Answer


For the SuperLearner object, you can see that predict() returns a list, with the ensemble probabilities in the $pred component:

predict(sl, x_train[1:2, ])
$pred
          [,1]
[1,] 0.4049966
[2,] 0.1905551

$library.predict
     SL.mean_All SL.glmnet_All SL.ranger_All
[1,]   0.3866667     0.5718232        0.2565
[2,]   0.3866667     0.1082986        0.0767
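
This list structure is also the source of the error above: the wrapper in the question ends up handing this list to mean(actual == predicted) (as.vector() on a list still yields a list), which cannot coerce it to type 'double'. A minimal check, reusing sl and x_train from the question:

# Confirm the prediction object is a list with $pred and $library.predict,
# not a plain numeric vector.
str(predict(sl, x_train[1:2, ]))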

If you read the help page (?predict.SuperLearner), I guess you want the prediction from the SuperLearner ensemble. So change the function to pull out the $pred probabilities and convert them to class labels:

pred_wrapper <- function(sl, newdata) {
  # Take the ensemble probabilities and threshold them into 0/1 labels.
  ifelse(predict(sl, newdata)$pred > 0.5, 1, 0)
}

We briefly check the result on the holdout set:

table(pred_wrapper(sl, x_holdout), y_holdout)
   y_holdout
      0   1
  0 183  39
  1   9 125
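
You can also compute the baseline holdout accuracy directly from the same wrapper; with metric = "accuracy" and type = "difference", vi_permute reports how much this number drops when each feature is permuted:

# Baseline accuracy of the labelled predictions on the holdout set.
mean(pred_wrapper(sl, x_holdout) == y_holdout)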

And pass x_holdout as the train argument (matching the y_holdout target):

p1 <- vi_permute(object = sl, method = "permute", feature_names = colnames, train = x_holdout, 
          target = y_holdout,
          metric = "accuracy",
          type = "difference", 
          nsim = 5,
          pred_wrapper = pred_wrapper) 

# A tibble: 13 x 3
   Variable Importance   StDev
   <chr>         <dbl>   <dbl>
 1 crim       0.00337  0.00126
 2 zn        -0.000562 0.00235
 3 indus      0.00337  0.00235
 4 chas       0.00674  0.00377
 5 nox        0.00225  0.00235
 6 rm         0.0315   0.0165 
 7 age        0.0213   0.0108 
 8 dis        0.0129   0.00944
 9 rad       -0.00169  0.00377
10 tax        0.00506  0.00126
11 ptratio    0.0174   0.0145 
12 black     -0.00281  0      
13 lstat      0.241    0.0204
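
Since the stated goal was to plot the VI scores, here is a minimal ggplot2 sketch over the returned tibble (using the Variable and Importance columns shown above):

library(ggplot2)

# Bar chart of permutation importance, most important variable on top.
ggplot(p1, aes(x = reorder(Variable, Importance), y = Importance)) +
  geom_col() +
  coord_flip() +
  labs(x = NULL, y = "Permutation importance (accuracy difference)")
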
  • For this portion of the code, "$pred>0.5,1,0", why did you decide to use this cutoff? Asked a different way, is there a "gold standard" method for determining such cutoffs? I have read that they should be determined via cross-validation, but I am unsure. – Mark Sep 05 '20 at 23:44
  • No, you don't need cross-validation for this; CV is for optimizing hyperparameters. In any case, you need to convert the probability to a label for the other functions to work. Your programming question was about the permutation importance; have I solved the problem? – StupidWolf Sep 06 '20 at 08:47
  • If there is a separate question, you can always post it again on SO. – StupidWolf Sep 06 '20 at 08:48