
I am trying to use BFGS as the solver for the optimization problem in an mlr3 tuning step. I checked the documentation for how to add the gradient that the solver needs.

Although I am able to add it in pure nloptr, I can find no way to do this in the bbotk library or at the mlr3 level. A minimal example shows what I mean:

library(mlr3)
library(paradox)
library(mlr3tuning)

inner_resampling <- rsmp("cv", folds = 5)
terminator <- trm("evals", n_evals = 10)
tsk <- tsk("pima")
learner <- lrn("classif.rpart")

search_space <- ps(
   cp = p_dbl(lower = 1e-4, upper = 0.1)
)

tuner <- tnr("nloptr", algorithm = "NLOPT_LD_LBFGS")
inst <- TuningInstanceSingleCrit$new(
  task = tsk,
  learner = learner,
  resampling = inner_resampling,
  terminator = terminator,
  search_space = search_space,
  measure = msr("classif.ce")
)
tuner$optimize(inst)

The result is:

Error in is.nloptr(ret) : 
  A gradient for the objective function is needed by algorithm NLOPT_LD_LBFGS but was not supplied.

When choosing a gradient-free algorithm (for example NLOPT_LN_BOBYQA), everything works fine.
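For comparison, this is what I mean by adding the gradient in pure nloptr: the `nloptr()` function takes the analytic gradient via its `eval_grad_f` argument, and this is the slot I cannot find in bbotk (toy objective, just to illustrate):

```r
library(nloptr)

# toy objective and its analytic gradient
eval_f <- function(x) (x - 3)^2
eval_grad_f <- function(x) 2 * (x - 3)

res <- nloptr(
  x0 = 0,
  eval_f = eval_f,
  eval_grad_f = eval_grad_f,  # <- this argument has no counterpart in bbotk
  opts = list(algorithm = "NLOPT_LD_LBFGS", xtol_rel = 1e-8)
)
res$solution  # converges to the minimum at x = 3
```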

My question now: is this possible at all? Or do gradient-based algorithms simply not work at the bbotk abstraction level and above? I tried to check the code (as far as I could :-) ), but I found no slot for adding a gradient function.

Thanks in advance, Peter

Peter M.

1 Answer


There are no gradients in this kind of black-box optimization. While in principle you could empirically determine gradients, that would go against the spirit of trying to achieve performance improvements with as few evaluations as possible.
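To see why empirical gradients conflict with that goal, a central finite-difference estimate (a hypothetical helper, not part of bbotk) costs two extra objective evaluations per dimension for every gradient call, and each evaluation in tuning is a full cross-validated model fit:

```r
# Hypothetical sketch: central finite differences estimate a gradient at the
# cost of 2 * length(x) additional objective evaluations per call.
numeric_grad <- function(f, x, h = 1e-6) {
  vapply(seq_along(x), function(i) {
    e <- replace(numeric(length(x)), i, h)  # unit step in dimension i
    (f(x + e) - f(x - e)) / (2 * h)
  }, numeric(1))
}

f <- function(x) sum(x^2)
numeric_grad(f, c(1, -2))  # approximately c(2, -4)
```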

There are no plans to support gradients for tuning in mlr3. Of course, if you're interested in this, you're welcome to contribute :)

Lars Kotthoff