I have run a benchmark experiment with several tasks, each containing a different subset of the data, and one classifier (a random forest from the ranger package).
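For context, the setup looks roughly like this (a minimal sketch; the data subsets `subset1`/`subset2`, the target column `"y"`, and the use of `cv10` are placeholders, not my actual data):

```r
library(mlr)

# One task per data subset, all with the same learner
tasks <- list(
  makeClassifTask(id = "subset1", data = subset1, target = "y"),
  makeClassifTask(id = "subset2", data = subset2, target = "y")
)
lrn <- makeLearner("classif.ranger")

# Benchmark the single learner across all tasks with 10-fold CV
bmr <- benchmark(lrn, tasks, resamplings = cv10)
```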
Now I would like to test for significant differences between these models using the post hoc Friedman-Nemenyi test via the mlr function friedmanTestBMR(). This, however, does not work, because friedmanTestBMR() requires at least two classifiers. Is there a statistical reason for this?
Calling posthoc.friedman.nemenyi.test() from the R package PMCMR (which friedmanTestBMR() uses internally) directly works fine.
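Concretely, I did something along these lines (a sketch, not verbatim code: the measure column name `mmce` is an assumption, and treating resampling iterations as the blocks only makes sense if the iterations are comparable across tasks):

```r
library(mlr)
library(PMCMR)

# Per-iteration performances from the benchmark result `bmr`
perf <- getBMRPerformances(bmr, as.df = TRUE)

# Friedman-Nemenyi with tasks as groups and CV iterations as blocks
posthoc.friedman.nemenyi.test(perf$mmce,
                              groups = factor(perf$task.id),
                              blocks = factor(perf$iter))
```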
- Good point; this should probably be supported. mlr is currently in maintenance-only mode with most development effort focused on mlr3, but if you want to have a go at changing this and open a pull request (https://github.com/mlr-org/mlr3/wiki/pr-guidelines) you're more than welcome (and we'll help). – Lars Kotthoff Apr 09 '20 at 15:37
- `friedmanTestBMR` tests for differences between learners, not for differences between tasks. – jakob-r Apr 14 '20 at 09:43
- Thanks for your answers. @Lars Kotthoff: the function has an explicit if statement that stops it when n.learners < 2, so I assume there is a (statistical) reason for it: `if (n.learners < 2) { stop("Benchmark results for at least two learners are required") }; n.tasks = length(bmr$results); if (n.tasks < 2) { stop("Benchmark results for at least two tasks are required") }` @jakob-r: can you give further information on why this function can only be applied to compare between learners and not between tasks? What kind of function/method could I use instead? – Edvin Apr 15 '20 at 09:32