I want to benchmark different models in mlr using 3-fold cross-validation. Within every fold, I want to run a feature selection for every model via another (inner) 3-fold cross-validation and pass the best feature set on to the outer cross-validation. However, I noticed that the result of a benchmark in mlr always seems to use all features.
How can I extract the features used in every fold and for every model from a benchmark, and how do I make sure they are really used in the corresponding outer cross-validation fold?
Here is some sample code:
task_cv <- makeClassifTask(
  id = 'predict future outages',
  data = data,
  target = 'targetVariable',
  positive = "1"  # makeClassifTask() expects the positive class as a string
)
# 3-fold CV, used both for the inner feature selection and the outer benchmark
vali_strat <- makeResampleDesc(method = "CV", iters = 3)
featSelControl <- makeFeatSelControlSequential(same.resampling.instance = TRUE,
                                               method = "sbs",
                                               tune.threshold = TRUE,
                                               alpha = 4,
                                               beta = 4)
learner_nv <- makeLearner(
  id = 'Naive Bayes',
  cl = 'classif.naiveBayes'
)
learner_knn <- makeLearner(
  id = 'KNN',
  cl = 'classif.kknn'
)
featSel_nv <- makeFeatSelWrapper(learner = learner_nv,
                                 resampling = vali_strat,  # inner CV for the selection
                                 control = featSelControl,
                                 measures = acc)
featSel_knn <- makeFeatSelWrapper(learner = learner_knn,
                                  resampling = vali_strat,
                                  control = featSelControl,
                                  measures = acc)
learners <- list(featSel_nv, featSel_knn)
bmr <- benchmark(
  learners = learners,
  tasks = task_cv,
  resamplings = vali_strat,  # outer 3-fold CV
  measures = acc
)
bmr$results$`predict future outages`$KNN.featsel$models[[1]]$features
With this code, I cannot extract the selected features, and the last line suggests that all features are used instead of the subsets chosen by the feature selection.
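For reference, this is the kind of access I would have expected to work, based on my reading of the mlr docs. It is only a sketch: I am assuming that getBMRModels() and getFeatSelResult() are the right accessors, that benchmark() kept the fitted models (models = TRUE should be its default), and that the wrapper stores the model fitted on the reduced feature set under learner.model$next.model:

# Per-fold models of the KNN feature-selection wrapper from the benchmark.
models_knn <- getBMRModels(bmr)[["predict future outages"]][["KNN.featsel"]]
# One FeatSelResult per outer fold; the selected feature names should be in $x.
lapply(models_knn, getFeatSelResult)
# Assumption: the inner model trained on only the selected features sits in
# next.model, so its $features should list just the selected subset.
models_knn[[1]]$learner.model$next.model$features

Even if something like this works, I would still like to know whether $features on the wrapper model is simply expected to report the full feature set of the task, and whether the selection is guaranteed to be applied in the outer folds.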