I'm running an mlr benchmark with multiple learners (around 15 of them) using nested resampling with the irace tuning control. My question: is it possible to combine two parallelization levels in parallelMap?
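For context, here is a minimal sketch of the kind of setup I mean (the learner, parameter set, and irace budget are just placeholders; in reality I have about 15 tune-wrapped learners):

```r
library(mlr)        # also loads ParamHelpers
library(parallelMap)

# Inner resampling and irace tuning, wrapped around each learner
inner <- makeResampleDesc("CV", iters = 3)
ctrl  <- makeTuneControlIrace(maxExperiments = 200L)

ps <- makeParamSet(  # placeholder parameter set for an SVM
  makeNumericParam("cost", lower = -5, upper = 5, trafo = function(x) 2^x)
)

lrn.svm <- makeTuneWrapper(makeLearner("classif.svm"), resampling = inner,
                           par.set = ps, control = ctrl)

# Outer resampling for the benchmark
outer <- makeResampleDesc("CV", iters = 5)
res <- benchmark(list(lrn.svm), iris.task, outer)
```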
If I use the mlr.benchmark level, the faster learners finish first and only the more computationally demanding ones keep running, each in a single thread. So I end up with only 4, maybe 5, threads busy.
If I use the mlr.tuneParams level, the irace tuning control spawns 6 threads, evaluates all 6 candidate configurations, and only spawns the next 6 once every one of them has finished. I know this is sequential by nature, since each irace iteration depends on the results of the previous one.
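For reference, these are the two configurations I have tried (the cpu counts are just examples for a 12-core machine):

```r
library(parallelMap)

# Option A: parallelize across the learners in the benchmark
parallelStartMulticore(cpus = 12, level = "mlr.benchmark")
# res <- benchmark(...)
parallelStop()

# Option B: parallelize the irace evaluations inside each tuning run
# (use parallelStartSocket() instead on Windows)
parallelStartMulticore(cpus = 6, level = "mlr.tuneParams")
# res <- benchmark(...)
parallelStop()
```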
My point is that either way the CPU cores are not fully used. For example, on a 12-core machine I could run two learners at the same time, with each learner using 6 cores for its tuning.
Right now I'm doing this manually: I open multiple R sessions and run a subset of the learners in each one.
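In case it clarifies the workaround, this is roughly what each manual session looks like (the file name, the two-way split, and the stand-in learners are made up):

```r
# run_subset.R -- launched once per learner subset, e.g.:
#   Rscript run_subset.R 1 &
#   Rscript run_subset.R 2 &
library(mlr)
library(parallelMap)

subset.id <- as.integer(commandArgs(trailingOnly = TRUE)[1])

# Stand-ins for my ~15 learners (in reality each is tune-wrapped with irace)
learners <- list(makeLearner("classif.rpart"), makeLearner("classif.svm"),
                 makeLearner("classif.randomForest"), makeLearner("classif.kknn"))
chunks <- split(learners, rep_len(1:2, length(learners)))

# Each session parallelizes only the inner tuning level on 6 cores
parallelStartMulticore(cpus = 6, level = "mlr.tuneParams")
res <- benchmark(chunks[[subset.id]], iris.task, makeResampleDesc("CV", iters = 5))
parallelStop()

saveRDS(res, sprintf("bmr_subset_%d.rds", subset.id))
```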
Thanks!