Questions tagged [ray-tune]

72 questions
2
votes
3 answers

Save episode rewards in ray.tune

I am training several agents with PPO algorithms in a multi-agent environment using rllib/ray. I am using the ray.tune() command to train the agents and then loading the training data from ~/ray_results. This data contains the actions chosen by the…
mat123a
  • 21
  • 1
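The question above is about recovering episode rewards from the training data Ray Tune writes under `~/ray_results`. Each trial directory typically contains a `progress.csv` with per-iteration metrics such as `episode_reward_mean`. A minimal, dependency-free sketch of parsing that column (the CSV data here is synthetic, and the column name is the RLlib default — adjust to whatever your results actually contain):

```python
import csv
import io

# Synthetic stand-in for a trial's progress.csv under ~/ray_results.
sample_progress_csv = """training_iteration,episode_reward_mean
1,12.5
2,18.0
3,25.5
"""

def read_episode_rewards(fileobj):
    """Return the episode_reward_mean column as a list of floats."""
    reader = csv.DictReader(fileobj)
    return [float(row["episode_reward_mean"]) for row in reader]

rewards = read_episode_rewards(io.StringIO(sample_progress_csv))
print(rewards)  # [12.5, 18.0, 25.5]
```

In practice you would open the real file, e.g. `read_episode_rewards(open(path_to_progress_csv))`, once per trial directory.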
2
votes
0 answers

Hyperparameter optimization in pytorch (currently with sklearn GridSearchCV)

I am using this (link) PyTorch tutorial and wish to add grid-search functionality to it, sklearn.model_selection.GridSearchCV (link), in order to optimize the hyperparameters. I struggle to understand what X and Y in gs.fit(x,y) should be; per the…
BFH
  • 23
  • 1
  • 7
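On the question of what `X` and `y` in `gs.fit(X, y)` should be: `X` is the feature matrix, one row per sample, and `y` is the matching target vector, one entry per sample. A library-free sketch of what a grid search then does with them — exhaustively trying every parameter combination and keeping the best score (the toy linear "model" and `mse` scorer are stand-ins, not sklearn internals):

```python
from itertools import product

# Toy data in the same shapes GridSearchCV expects:
# X is (n_samples, n_features), y is (n_samples,).
X = [[0.0], [1.0], [2.0], [3.0]]
y = [0.0, 1.0, 2.0, 3.0]

param_grid = {"slope": [0.5, 1.0, 2.0], "intercept": [0.0, 0.5]}

def mse(params, X, y):
    """Mean squared error of a fixed linear model y = slope*x + intercept."""
    preds = [params["slope"] * row[0] + params["intercept"] for row in X]
    return sum((p - t) ** 2 for p, t in zip(preds, y)) / len(y)

# Exhaustive grid search: evaluate every combination, keep the best.
best_params, best_score = None, float("inf")
for values in product(*param_grid.values()):
    params = dict(zip(param_grid.keys(), values))
    score = mse(params, X, y)
    if score < best_score:
        best_params, best_score = params, score

print(best_params)  # {'slope': 1.0, 'intercept': 0.0}
```

`GridSearchCV` does the same enumeration but scores each combination with cross-validation on the `(X, y)` you pass to `fit`.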
1
vote
1 answer

Gekko solutions not found while trying to implement an elastic net regression

I am presently trying to build an elastic net regression model using Gekko. I'm using Gekko instead of sklearn etc. because I also need to implement additional constraints on my variable coefficients. The Gekko code works if I omit the…
1
vote
1 answer

How to end episodes after 200 steps in Ray Tune (tune.run()) using a PPO model with torch

I'm using the following code to import a custom environment and then train on it: from ray.tune.registry import register_env import ray from ray import air, tune from ray.rllib.algorithms.ppo import PPO from gym_env.cube_env import…
Dawid
  • 13
  • 2
1
vote
0 answers

Ray cluster GPU detection

I am trying to do distributed HPO on a Slurm cluster, but Ray does not detect the GPUs correctly. I have a head node with only CPUs that is only supposed to run the scheduler, and X identical worker nodes with 4 GPUs each, but Ray only detects the…
Wacken0013
  • 11
  • 3
1
vote
0 answers

Is there a way to use Ray Tune in combination with ML Flow and Hydra in Python?

I want to do hyperparameter tuning for a neural net created with Keras. For this project I handle my config.yaml files with Hydra, use MLflow to store the metrics and parameters from the optimization, and use Ray to parallelize the computation of…
Patrick
  • 11
  • 2
1
vote
0 answers

What is the best way to add a custom defined scheduler in RayTune?

I want to try a scheduler for ASA and HPO that is not implemented in Ray Tune, so I was wondering whether there is any way to do that, as I could not find information about it in the documentation.
Pepe
  • 112
  • 1
  • 9
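For a custom scheduler, Ray Tune's documented extension point is subclassing `ray.tune.schedulers.TrialScheduler` (or an existing subclass such as `FIFOScheduler`) and passing the instance via `tune.run(..., scheduler=...)`. Below is a library-free sketch of the successive-halving policy that ASHA-style schedulers implement — it only illustrates the decision logic, not the Ray Tune interface:

```python
# Library-free sketch of successive halving, the policy behind
# ASHA-style schedulers. A real Ray Tune scheduler would implement
# this inside a ray.tune.schedulers.TrialScheduler subclass.

def successive_halving(configs, evaluate, eta=2):
    """Evaluate all surviving configs with a growing budget,
    keeping only the top 1/eta fraction each round."""
    budget = 1
    survivors = list(configs)
    while len(survivors) > 1:
        scored = sorted(survivors, key=lambda c: evaluate(c, budget), reverse=True)
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta  # survivors earn a larger training budget
    return survivors[0]

# Toy objective: scores are higher the closer "lr" is to 0.1.
def evaluate(config, budget):
    return -abs(config["lr"] - 0.1)

configs = [{"lr": lr} for lr in (0.001, 0.01, 0.1, 1.0)]
best = successive_halving(configs, evaluate)
print(best)  # {'lr': 0.1}
```

The `TrialScheduler` interface is callback-based (e.g. deciding per reported result whether a trial continues, pauses, or stops), so the halving loop above would be spread across those callbacks rather than written as one function.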
1
vote
0 answers

Memory usage on node keeps increasing while training a model with Ray Tune

This is the first time I am using Ray Tune to look for the best hyperparameters for a DL model, and I am experiencing some problems related to memory usage. The memory usage on this node keeps increasing, which leads to an error in the trial run.…
1
vote
0 answers

Is it possible to disable tensorboard in raytune?

I'm learning how to use ray.tune to optimize my models. Each execution logs results to TensorBoard by default. I don't have it installed and I'm not using it, so is there a way to disable it and run all the executions without logging?
Rafael Higa
  • 655
  • 1
  • 8
  • 17
1
vote
0 answers

Which configuration leads to Invalid beta parameter: 1.1203716642316337 - should be in [0.0, 1.0) error?

I am running a hyperparameter tuning using Ray Tune integration (1.9.2) and hugging face transformers framework (4.15.0). This is the code that is responsible for the procedure (based on this example): def search_hyper_parameters( trainer:…
1
vote
0 answers

Issues running ray (tune) on an SGE cluster

I'm trying to run ray.tune on an SGE cluster to optimise hyperparameters for a model in PyTorch. The code runs fine on my laptop (but runs out of memory, which is why I want to use the cluster), but when running on my university cluster I get an…
Dom Byrne
  • 9
  • 1
1
vote
1 answer

Best Config after Hyperparameter Search with Ray Tune

I just ran my first Ray Tune experiment. I got nice terminal output and all that, but now I'm wondering: which configuration gave me the best score? I see that there are a ton of result files, but is there an easy way to get the best config?
xotix
  • 494
  • 1
  • 13
  • 41
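Ray Tune exposes exactly this on the analysis object returned by `tune.run`, via `get_best_config(metric=..., mode=...)` (parameter names may vary slightly by version). What that call boils down to, shown on synthetic trial results:

```python
# Synthetic trial results: one dict per finished trial, holding the
# config that was tried and the metric it achieved.
trial_results = [
    {"config": {"lr": 0.1, "batch": 32}, "mean_accuracy": 0.81},
    {"config": {"lr": 0.01, "batch": 64}, "mean_accuracy": 0.92},
    {"config": {"lr": 0.001, "batch": 32}, "mean_accuracy": 0.88},
]

def get_best_config(results, metric, mode="max"):
    """Pick the config whose trial scored best on the given metric."""
    pick = max if mode == "max" else min
    best = pick(results, key=lambda r: r[metric])
    return best["config"]

print(get_best_config(trial_results, "mean_accuracy"))
# {'lr': 0.01, 'batch': 64}
```

With `mode="min"` the same helper selects the lowest-loss trial instead.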
1
vote
2 answers

Obtaining different set of configs across multiple calls in ray tune

I am trying to make my code reproducible. I have already added np.random.seed(...) and random.seed(...), and at the moment I am not using PyTorch or TF, so no scheduler or searcher can introduce any randomness. The set of configs produced…
Roxana
  • 392
  • 1
  • 3
  • 12
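The core of reproducible config sampling is that the generator drawing the configs must itself be seeded before any sample is taken. Which RNG Ray Tune's samplers use internally varies by version, so `np.random.seed`/`random.seed` set in the wrong process or at the wrong time may not reach them; a stdlib sketch of the pattern, using a dedicated seeded generator:

```python
import random

def sample_configs(n, seed):
    """Draw n random configs from a generator seeded explicitly,
    so the same seed always yields the same configs."""
    rng = random.Random(seed)  # dedicated generator, not the global one
    return [
        {"lr": rng.uniform(1e-4, 1e-1), "layers": rng.choice([2, 3, 4])}
        for _ in range(n)
    ]

run_a = sample_configs(5, seed=42)
run_b = sample_configs(5, seed=42)
print(run_a == run_b)  # True: same seed, same configs
```

If repeated `tune.run` calls still differ despite seeding, the divergence usually comes from an unseeded source in a worker process rather than from the search space itself.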
1
vote
0 answers

Ray Tune random search indefinitely many samples

Code sample to illustrate the issue: from ray import tune def objective(step, alpha, beta): return (0.1 + alpha * step / 100)**(-1) + beta * 0.1 def training_function(config): # Hyperparameters alpha, beta = config["alpha"],…
ptyshevs
  • 1,602
  • 11
  • 26
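For open-ended random search, Ray Tune conventionally accepts `num_samples=-1` in `tune.run` to mean "keep sampling indefinitely", usually paired with a stopping condition such as a time budget (check your version's docs). The underlying pattern is an unbounded config generator consumed lazily:

```python
import itertools
import random

def random_configs(seed=0):
    """Infinite stream of random configs; never exhausts on its own,
    so the consumer must impose the stopping condition."""
    rng = random.Random(seed)
    while True:
        yield {"alpha": rng.uniform(0, 1), "beta": rng.uniform(0, 1)}

# Take only the first 3 configs from the infinite stream.
first_three = list(itertools.islice(random_configs(), 3))
print(len(first_three))  # 3
```

The same generator could instead be drained until a wall-clock or best-score threshold is hit, which is what a time-budgeted infinite search amounts to.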
1
vote
1 answer

Add More Metrics to Ray Tune Status Table (Python, PyTorch)

When running tune.run() on a set of configs to search, is it possible to add more metric columns (i.e. a, b, etc.) to the status table being printed out? tune.track.log(a=metric1, b=metric2) will give the following table without columns for the…
Nyxynyx
  • 61,411
  • 155
  • 482
  • 830
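Adding columns to the printed status table is what Ray Tune's progress reporters are for — e.g. `tune.run(..., progress_reporter=CLIReporter(metric_columns=[...]))`, with the exact class and parameter depending on the version. What that amounts to, sketched as a plain table formatter over reported metrics:

```python
# Sketch of a status table with user-chosen metric columns; Ray Tune's
# progress reporter renders something similar from tune.track.log()
# (or tune.report()) values.

def format_status_table(trials, metric_columns):
    """Render trials as a fixed-width table including the extra metrics."""
    headers = ["trial"] + metric_columns
    rows = [
        [t["name"]] + [str(t["metrics"].get(m, "-")) for m in metric_columns]
        for t in trials
    ]
    # Pad each column to the widest cell it contains.
    widths = [
        max(len(str(row[i])) for row in [headers] + rows)
        for i in range(len(headers))
    ]
    lines = [
        " | ".join(str(cell).ljust(w) for cell, w in zip(row, widths))
        for row in [headers] + rows
    ]
    return "\n".join(lines)

trials = [
    {"name": "trial_0", "metrics": {"a": 0.5, "b": 1.2}},
    {"name": "trial_1", "metrics": {"a": 0.7}},
]
print(format_status_table(trials, ["a", "b"]))
```

Missing metrics render as `-`, mirroring how a reporter handles trials that have not yet logged a given value.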