Code sample to illustrate the issue:
from ray import tune

def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1

def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)

analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice(list(range(10000)))
    },
    num_samples=1000000)
The problem I face is that the tune.run call samples the search space num_samples times up front, before the first trial even starts executing.
Question: is it possible to make Tune sample the search space after each trial instead?
For tune.suggest.Searcher-descendant algorithms (AxSearch, for example) it is possible to limit the number of concurrent trials by wrapping the search algorithm in a ConcurrencyLimiter. But how can I do this for random search?