Questions tagged [mlr]

mlr is a machine learning package for R that provides an interface to many other packages.

mlr is an R package that provides a standardized API to many of R's machine learning packages. On top of that, it offers resampling, feature selection, automatic tuning, cost-sensitive learning and much more. Its website can be found at https://github.com/mlr-org/mlr/
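For readers new to the tag, a minimal sketch of mlr's standardized workflow (task, learner, resampling) might look like the following; the learner (`classif.rpart`) and dataset (`iris`) are illustrative choices, not requirements:

```r
library(mlr)

# Define a classification task on a built-in dataset
task <- makeClassifTask(data = iris, target = "Species")

# Create a learner through mlr's unified interface
lrn <- makeLearner("classif.rpart")

# Describe a 5-fold cross-validation scheme and run it
rdesc <- makeResampleDesc("CV", iters = 5)
res <- resample(lrn, task, rdesc, measures = acc)

# Aggregated accuracy across the folds
print(res$aggr)
```

Swapping the learner string (e.g. `"classif.xgboost"`, `"classif.h2o.deeplearning"`) is all that is needed to use a different underlying package, which is the point of the standardized API.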

328 questions
1
vote
2 answers

mlr equivalent of caret's model selectionFunction in R

The caret package in R has a 'selectionFunction' argument inside trainControl(). It's used to prevent over-fitting by selecting models with Breiman's one-standard-error rule, tolerance, etc. Does mlr have an equivalent? If so, which function is it…
Brad
  • 580
  • 4
  • 19
1
vote
1 answer

Running Random Search in mlr R package on Ubuntu 18.04 takes too long

I have a problem when I search for optimal hyperparameters of xgboost with the mlr package in R, using the random search method, on Ubuntu 18.04. This is the setup code for the search: eta_value <- 0.05 set.seed(12345) # 2. Create tasks train.both$y <-…
Corel
  • 581
  • 3
  • 21
1
vote
2 answers

R - mlr - What is the difference between Benchmark and Resample when searching for hyperparameters

I'm searching for the optimal hyperparameter settings, and I realise I can do that in two ways in mlr: the benchmark function and the resample function. What is the difference between the two? If I were to do it via benchmark, I can compare multiple…
Choc_waffles
  • 518
  • 1
  • 4
  • 15
1
vote
1 answer

How to determine which fold was finally used as a test in CV?

How can I determine which fold was used as the test set and which folds as training in 5-fold cross-validation in the mlr package? The methods $resampling$train.inds and $resampling$test.inds return all 5 folds without the information that eventually…
lodomi
  • 67
  • 3
1
vote
1 answer

mlr - parameter name clash with randomForestSRC_var.select filter using method argument

When I use the randomForestSRC_var.select filter and pass a method parameter to it (e.g. method="vh" for variable hunting) I get a name clash because an internal function also uses a parameter called method. This was raised as an issue on Github,…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
1 answer

mlr: Filter Methods with Tuning

This section of the mlr tutorial: https://mlr.mlr-org.com/articles/tutorial/nested_resampling.html#filter-methods-with-tuning explains how to use a TuneWrapper with a FilterWrapper to tune the threshold for the filter. But what if my filter has…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
0 answers

Silent crash only when using parallelMap in mlr

I am running an mlr benchmark with about 12 learners. My code runs without any problem when I do not use parallelMap, but as soon as I add parallelization it crashes silently, always at the same point, even with only 2 cores. I thought it must be…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
1 answer

mlr: What is the best way to test for a FailureModel?

The mlr function configureMlr() allows users to set the following parameter: on.learner.error: What should happen if an error in an underlying learning algorithm is caught. “warn”: a FailureModel will be created, which predicts only NAs, and a warning…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
1 answer

Specify `makeNumericVectorParam` for the `hidden_dropout_ratios` hyperparameter, which depends on the number of hidden layers

I would like to tune the "classif.h2o.deeplearning" learner via mlr. During tuning, there are several architectures I would like to explore. For each of these architectures I would like to specify a dropout space. However, I am struggling with…
missuse
  • 19,056
  • 3
  • 25
  • 47
1
vote
1 answer

Why do I get different performance metrics in mlr package when I run the same model twice?

I get two different performance metrics when I run this code twice in a row, and I'm not sure I understand why this is happening, as I'm using the same training and testing sets. I'm setting the seed at the beginning as well.…
Ashti
  • 193
  • 1
  • 10
1
vote
0 answers

mlr: makeStackedLearner for survival models

makeStackedLearner in mlr is only available for regression, classification and multi-label classification models. Is there any reason why it could not be applied to survival models, perhaps for example, for a simple averaging of the results of…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
1 answer

Wrong results of noisy optimization in r and mlrMBO

I have a noisy optimization task. I am trying to find the parameters of a function (which assets to choose and their weights) that minimize its result (tracking error, the difference between portfolio and index returns). Financial terms are…
SquintRook
  • 13
  • 3
1
vote
1 answer

mlr: nested resampling for feature selection

I am running a benchmark experiment in which I tune a filtered learner, similar to the example given in the mlr tutorial under nested resampling and titled "Example 3: One task, two learners, feature filtering with tuning". My code is as…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
1 answer

mlr: retrieve output of generateFilterValuesData within CV loop

If I fuse a learner with a filter method using makeFilterWrapper, then I know I can perform feature selection using that filter within a cross-validation loop. As I understand it, filterFeatures is called before each model fit and it calls…
panda
  • 821
  • 1
  • 9
  • 20
1
vote
1 answer

Parameter tuning outputs NA

I am new to parameter tuning with the mlr package. I recently tried it with the xgboost algorithm on a binary classification problem. I couldn't get the trained accuracy, only NA. After a round of googling, I was not able to debug my code. Could you give me some…
user11806155
  • 121
  • 5