
I am trying to plot the Pareto front of a TuneMultiCritResult object, tuned with a control object of class TuneMultiCritControlMBO:

# multi-criteria optimization of (tpr, fpr) with MBO
lrn =  makeLearner("classif.ksvm")
rdesc = makeResampleDesc("Holdout")
ps = makeParamSet(
  makeNumericParam("C", lower = -12, upper = 12, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -12, upper = 12, trafo = function(x) 2^x)
)
ctrl = makeTuneMultiCritControlMBO()
res = tuneParamsMultiCrit(lrn, sonar.task, rdesc, par.set = ps,
  measures = list(tpr, fpr), control = ctrl)

Printing the object res gives the following:

> res
Tune multicrit result:
Points on front: 14
> res$ind
 [1]  1  2  4  5  6  7  9 11 12 14 15 16 17 18

But the optimization path saved in res$opt.path contains only 10 points, which I guess are the ones proposed by MBO.

> res$opt.path
Optimization path
  Dimensions: x = 2/2, y = 2
  Length: 10
  Add x values transformed: FALSE
  Error messages: TRUE. Errors: 0 / 10.
  Exec times: TRUE. Range: 0.031 - 0.041. 0 NAs.

Since the function plotTuneMultiCritResult relies on res$ind and res$opt.path to plot the front, it shows weird results.
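A guess at why the plot looks weird (a base-R sketch of data-frame indexing, not the actual plotTuneMultiCritResult internals): if the front indices in res$ind run past the end of the data frame extracted from res$opt.path, subsetting silently yields NA rows, which would be drawn as garbage points.

```r
# Synthetic stand-in for as.data.frame(res$opt.path): one row per evaluation.
set.seed(1)
op = data.frame(tpr = runif(10, 0.5, 1), fpr = runif(10, 0, 0.5))

# Indices like res$ind, where 12 is out of range (cf. indices 11..18 above).
ind = c(1, 2, 4, 12)

# Subsetting a data frame past its last row does not error in R;
# it returns rows filled with NA, which a plot would render as junk.
front = op[ind, ]
front
```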

I think the correct fix is to copy the optimization path from res$mbo.result$opt.path into res$opt.path, but my question is: what is the point of having different optimization paths in res$opt.path and res$mbo.result$opt.path?
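For context on what res$ind holds: it stores the row indices of the non-dominated points in the optimization path. The filtering idea can be sketched in base R for (tpr, fpr), where tpr is maximized and fpr minimized (this illustrates the concept only; it is not mlr's implementation, and `pareto_front_ind` is a made-up helper name):

```r
# A point is on the Pareto front if no other point is at least as good
# in both measures (tpr higher-or-equal, fpr lower-or-equal) and strictly
# better in at least one of them.
pareto_front_ind = function(tpr, fpr) {
  n = length(tpr)
  keep = vapply(seq_len(n), function(i) {
    dominated = any(tpr >= tpr[i] & fpr <= fpr[i] &
                    (tpr > tpr[i] | fpr < fpr[i]))
    !dominated
  }, logical(1))
  which(keep)
}

tpr = c(0.90, 0.80, 0.85, 0.70)
fpr = c(0.30, 0.10, 0.35, 0.05)
pareto_front_ind(tpr, fpr)  # point 3 is dominated by point 1 -> 1 2 4
```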

Thanks!! Víctor

lordbitin
  • I posted the working code below, but here are some comments on your points: what is weird about the plot from `plotTuneMultiCritResult`? The opt.path inside `res$mbo.result` is the optimization path used internally by mlrMBO and is not needed in mlr; it stores some information useful for analyzing the MBO part. It just happens to be the same kind of object because mlr and mlrMBO are developed by the same people. – jakob-r Jan 07 '19 at 09:24
  • For some reason I had some points in the Pareto front (`res$ind`) beyond the length of the optimization path `res$opt.path`. I'll be back if I experience the same problem again. – lordbitin Jan 07 '19 at 20:23

1 Answer


Using mlr_2.13 and mlrMBO_1.1.3 and the following code, everything works as expected. I suggest that you use the MBO control object to specify how many iterations your optimization should have. Otherwise a default (4*d evaluations for the initial design + 10 iterations) will be used.

set.seed(1)
library(mlr)
library(mlrMBO)
# multi-criteria optimization of (tpr, fpr) with MBO
lrn =  makeLearner("classif.ksvm")
rdesc = makeResampleDesc("Holdout")
ps = makeParamSet(
  makeNumericParam("C", lower = -12, upper = 12, trafo = function(x) 2^x),
  makeNumericParam("sigma", lower = -12, upper = 12, trafo = function(x) 2^x)
)
mbo.ctrl = makeMBOControl(n.objectives = 2)
mbo.ctrl = setMBOControlTermination(mbo.ctrl, iters = 20)
# pass the MBO control so the termination settings above actually take effect
ctrl = makeTuneMultiCritControlMBO(n.objectives = 2, mbo.control = mbo.ctrl)
res = tuneParamsMultiCrit(lrn, sonar.task, rdesc, par.set = ps,
                          measures = list(tpr, fpr), control = ctrl)
plotTuneMultiCritResult(res = res, path = FALSE) # path = FALSE shows only the Pareto front
jakob-r
  • Thank you for the advice on setting the number of iterations manually. I am not sure why, but now I cannot reproduce the error I was seeing before. I do not know what I was doing wrong, but you are right, the code above works. Thanks!! – lordbitin Jan 07 '19 at 20:21