I want to tune an xgboost learner and set the parameter early_stopping_rounds to 10% of the parameter nrounds (i.e. 10% of whatever value is generated for nrounds in each tuning iteration). Tuning one parameter relative to another should be a simple thing to do in general, but I can't make it work; see the example below:
library(mlr3verse)
#> Loading required package: mlr3
learner = lrn('surv.xgboost', nrounds = to_tune(50, 5000),
  early_stopping_rounds = to_tune(ps(
    a = p_int(10, 5000), # had to put something in here; `early_stopping_rounds` itself also doesn't work
    .extra_trafo = function(x, param_set) {
      list(early_stopping_rounds = ceiling(0.1 * x$nrounds))
    }, .allow_dangling_dependencies = TRUE)))
#> Error in self$assert(xs): Assertion on 'xs' failed: early_stopping_rounds: tune token invalid: to_tune(ps(a = p_int(10, 5000), .extra_trafo = function(x, param_set) { list(early_stopping_rounds = ceiling(0.1 * x$nrounds)) }, .allow_dangling_dependencies = TRUE)) generates points that are not compatible with param early_stopping_rounds.
#> Bad value:
#> numeric(0)
#> Parameter:
#>                       id    class lower upper levels default
#> 1: early_stopping_rounds ParamInt     1   Inf             .
# this works though:
pam = ps(z = p_int(-3, 3), x = p_int(0, 10),
  .extra_trafo = function(x, param_set) {
    x$z = 2 * (x$x) # overwrite z as 2*x
    x
  })
dplyr::bind_rows(generate_design_random(pam, 5)$transpose())
#> # A tibble: 5 × 2
#>       z     x
#>   <dbl> <int>
#> 1     2     1
#> 2    14     7
#> 3     8     4
#> 4    12     6
#> 5    20    10
Created on 2022-08-29 by the reprex package (v2.0.1)
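For what it's worth, one workaround I'd expect to behave like the working `pam` example (a sketch, not verified against `surv.xgboost`): instead of attaching `to_tune()` tokens to the learner, define the search space externally over `nrounds` only and derive `early_stopping_rounds` inside a single `.extra_trafo`. Because `early_stopping_rounds` is not itself a search-space parameter, the trafo can see `x$nrounds` and no dangling dependency is needed; the variable names below match the original example, but passing this `search_space` to the tuner in place of the tokens is my assumption.

```r
library(paradox)

# search space over nrounds only; early_stopping_rounds is added
# by the trafo after each design point is generated
search_space = ps(
  nrounds = p_int(50, 5000),
  .extra_trafo = function(x, param_set) {
    x$early_stopping_rounds = ceiling(0.1 * x$nrounds)
    x
  }
)

# every transposed design point now carries both parameters,
# with early_stopping_rounds equal to 10% of nrounds (rounded up)
design = generate_design_random(search_space, 3)$transpose()
```

The resulting list of parameter configurations should then be usable wherever a `search_space` argument is accepted (e.g. a tuning instance), rather than via per-parameter `to_tune()` tokens.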