I am trying to use BFGS as the solver for the optimization problem in an mlr3 tuning step. I checked the documentation for how to add the gradient that the solver needs.
Although I am able to supply the gradient when calling the nloptr package directly, I find no way to do this in the bbotk library or at the mlr3 level.
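For reference, this is roughly how I pass the gradient in plain nloptr (a minimal sketch on a toy function, not my actual tuning problem):

library(nloptr)

# toy objective f(x) = (x - 3)^2 with its analytic gradient
eval_f = function(x) (x - 3)^2
eval_grad_f = function(x) 2 * (x - 3)

res = nloptr(
  x0 = 0,
  eval_f = eval_f,
  eval_grad_f = eval_grad_f,  # this is the slot I am missing on the mlr3/bbotk side
  lb = -10,
  ub = 10,
  opts = list(algorithm = "NLOPT_LD_LBFGS", maxeval = 100)
)
res$solution

A minimal example of my actual tuning setup shows what I mean: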
library(mlr3)
library(paradox)
library(mlr3tuning)

inner_resampling = rsmp("cv", folds = 5)
terminator = trm("evals", n_evals = 10)
task = tsk("pima")
learner = lrn("classif.rpart")

search_space = ps(
  cp = p_dbl(lower = 1e-4, upper = 0.1)
)

# gradient-based NLopt algorithm
tuner = tnr("nloptr", algorithm = "NLOPT_LD_LBFGS")

inst = TuningInstanceSingleCrit$new(
  task = task,
  learner = learner,
  resampling = inner_resampling,
  terminator = terminator,
  search_space = search_space,
  measure = msr("classif.ce")
)

tuner$optimize(inst)
The result is:
Error in is.nloptr(ret) :
A gradient for the objective function is needed by algorithm NLOPT_LD_LBFGS but was not supplied.
When choosing a gradient-free algorithm (for example NLOPT_LN_BOBYQA), everything works fine.
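For completeness, this is the variant that runs without errors for me (same instance as above, only the algorithm swapped):

tuner = tnr("nloptr", algorithm = "NLOPT_LN_BOBYQA")
tuner$optimize(inst)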
My question now: Is this generally possible, or do gradient-based algorithms simply not work at the bbotk abstraction level and above? I tried to check the code (as far as that is possible for me :-) ), but I found no slot for adding the gradient function.
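In case it clarifies where I am stuck, this is how I understand the same thing one level down in plain bbotk (a sketch on a toy function; I could not find a gradient argument in ObjectiveRFun or in the nloptr optimizer either):

library(bbotk)
library(paradox)

# toy objective wrapped the bbotk way; ObjectiveRFun only takes the function value
objective = ObjectiveRFun$new(
  fun = function(xs) list(y = (xs$x - 3)^2),
  domain = ps(x = p_dbl(lower = -10, upper = 10)),
  codomain = ps(y = p_dbl(tags = "minimize"))
)

inst_bbotk = OptimInstanceSingleCrit$new(
  objective = objective,
  terminator = trm("evals", n_evals = 10)
)

# I expect this to fail with the same gradient message,
# since there seems to be no gradient slot here either
opt("nloptr", algorithm = "NLOPT_LD_LBFGS")$optimize(inst_bbotk)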
Thanks in advance, Peter