
I want to set up a benchmark design with graph learners. From the book, I learned that with predefined learners I can do something like this:

library(mlr3)
library(mlr3learners)  # provides classif.ranger and classif.kknn

learners = c("classif.featureless", "classif.rpart", "classif.ranger", "classif.kknn")
learners = lapply(learners, lrn,
  predict_type = "prob", predict_sets = c("train", "test"))

# compare via 3-fold cross-validation
resamplings = rsmp("cv", folds = 3)

# create a BenchmarkDesign object (tasks is a Task or list of Tasks defined elsewhere)
design = benchmark_grid(tasks, learners, resamplings)
print(design)

Now, my graph learners are defined like this and differ only in the parameter FRacPar:

gr_knn_pca = po("pca", center = TRUE, scale. = TRUE) %>>%
  po("filter", filter = mlr3filters::flt("variance"), filter.frac = FRacPar) %>>%
  po(lrn("classif.kknn", predict_type = "prob"),
     param_vals = list(k = k_chosen, distance = distance_chosen, kernel = "rectangular"))

I would like something similar to the first chunk, so I can set up a benchmark. My input would be a vector of fractions, e.g. FRacPar_values = c(0.1, 0.2, 0.5, 1).

How can I proceed here?

  • I would simply create a function that takes a parameter value and returns the corresponding learner. Then `lapply` across the list of parameter values. – Lars Kotthoff Apr 16 '21 at 18:52
  • Maybe what you want to do is [tuning](https://mlr3book.mlr-org.com/tuning.html), for example using a [`tnr("design_points")`](https://mlr3tuning.mlr-org.com/reference/mlr_tuners_design_points.html) to try out the different values? This doesn't give you a list of `GraphLearner`s directly, but it *does* give you a `BenchmarkResult` recording the performance values of the different configurations. – mb706 Apr 17 '21 at 09:18
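
A minimal sketch of the first comment's lapply approach. The helper name `make_graph_learner` is mine, and it assumes `tasks`, `k_chosen`, and `distance_chosen` are already defined as in the question:

library(mlr3)
library(mlr3learners)
library(mlr3pipelines)
library(mlr3filters)

# build one GraphLearner for a given filter fraction
make_graph_learner = function(frac) {
  gr = po("pca", center = TRUE, scale. = TRUE) %>>%
    po("filter", filter = flt("variance"), filter.frac = frac) %>>%
    po(lrn("classif.kknn", predict_type = "prob",
           k = k_chosen, distance = distance_chosen, kernel = "rectangular"))
  glrn = GraphLearner$new(gr)
  glrn$id = sprintf("knn_pca_frac_%g", frac)  # unique id so results stay distinguishable
  glrn
}

FRacPar_values = c(0.1, 0.2, 0.5, 1)
learners = lapply(FRacPar_values, make_graph_learner)

design = benchmark_grid(tasks, learners, rsmp("cv", folds = 3))
bmr = benchmark(design)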
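
And a sketch of the tuning route from the second comment. It assumes a single `task` and a `gr_knn_pca` graph built without fixing `filter.frac`; since the variance filter PipeOp gets the id "variance", the parameter appears on the GraphLearner as `variance.filter.frac`:

library(mlr3tuning)
library(paradox)
library(data.table)

glrn = GraphLearner$new(gr_knn_pca)  # filter.frac left open for tuning

instance = TuningInstanceSingleCrit$new(
  task = task,
  learner = glrn,
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  search_space = ps(variance.filter.frac = p_dbl(0.1, 1)),
  terminator = trm("none")  # design_points stops once the design is exhausted
)

tuner = tnr("design_points",
  design = data.table(variance.filter.frac = c(0.1, 0.2, 0.5, 1)))
tuner$optimize(instance)

instance$archive                   # one row per tried fraction
instance$archive$benchmark_result  # the underlying BenchmarkResult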

0 Answers