
Code sample to illustrate the issue:

from ray import tune


def objective(step, alpha, beta):
    return (0.1 + alpha * step / 100)**(-1) + beta * 0.1


def training_function(config):
    # Hyperparameters
    alpha, beta = config["alpha"], config["beta"]
    for step in range(10):
        # Iterative training function - can be any arbitrary training procedure.
        intermediate_score = objective(step, alpha, beta)
        # Feed the score back to Tune.
        tune.report(mean_loss=intermediate_score)


analysis = tune.run(
    training_function,
    config={
        "alpha": tune.grid_search([0.001, 0.01, 0.1]),
        "beta": tune.choice(list(range(10000)))
    },
    num_samples=1000000)

The problem I face is that the tune.run call samples the search space num_samples times up front, before the first trial starts executing.

Question: is it possible to make Tune sample the search space after each trial instead?

It is possible to limit the number of concurrent trials for tune.suggest.Searcher-descendant algorithms (AxSearch, for example) by wrapping the search algorithm in a ConcurrencyLimiter. But how can I do this for random search?
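For what it's worth, the core idea of the workaround later mentioned in the comments (a custom Searcher that draws one configuration per suggest() call, so the space is sampled lazily rather than materialized up front) can be sketched without Ray at all. This is a schematic illustration of the sampling logic, not the actual tune.suggest.Searcher interface; the class name, the space-as-callables convention, and the method signatures are simplifications:

```python
import random


class LazyRandomSearcher:
    """Schematic lazy random search: draws one configuration per
    suggest() call instead of generating num_samples configs up front."""

    def __init__(self, space, seed=None):
        # space maps each hyperparameter name to a callable that,
        # given an RNG, returns one sample for that parameter.
        self.space = space
        self.rng = random.Random(seed)
        self.live_trials = {}

    def suggest(self, trial_id):
        # Sample the search space only when a new trial is requested.
        config = {name: sample(self.rng) for name, sample in self.space.items()}
        self.live_trials[trial_id] = config
        return config

    def on_trial_complete(self, trial_id, result=None):
        # Free the bookkeeping for a finished trial.
        self.live_trials.pop(trial_id, None)


# The search space from the question, expressed as sampling callables.
searcher = LazyRandomSearcher({
    "alpha": lambda rng: rng.choice([0.001, 0.01, 0.1]),
    "beta": lambda rng: rng.randrange(10000),
}, seed=0)

cfg = searcher.suggest("trial_1")  # one config drawn on demand
```

Wrapping a real Searcher subclass built along these lines in a ConcurrencyLimiter would then bound how many trials (and hence how many samples) exist at any moment.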

ptyshevs
  • RayTune currently auto-generates all of the samples ahead of time. Could you post an issue on Github? This feature request makes sense. – richliaw Aug 15 '20 at 19:38
  • I've created a workaround using a custom Searcher with a ConcurrencyLimiter on top of it. I think once it's ready, I'll post an issue together with a PR and update this question. – ptyshevs Aug 16 '20 at 03:55

0 Answers