Ray RLlib is an open-source Python library for Reinforcement Learning. Use with applicable framework tags, such as TensorFlow or PyTorch.
Questions tagged [rllib]
105 questions
0
votes
1 answer
open file inside Ray
I'm using Ray and created a custom env.
However, the custom env needs to open a file, and Ray creates its workers in a different working directory.
Therefore, I can't access the file.
When printing the worker location I get:…

Guy-Arieli
- 33
- 7
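
Since Ray workers generally do not share the driver's working directory, relative paths inside a custom env tend to break. A minimal sketch of one common workaround, resolving the path relative to the env module or passing an absolute path in through env_config (the FileBackedEnv class and data.csv path are hypothetical, not from the question):

import os

import gym


class FileBackedEnv(gym.Env):
    """Hypothetical custom env that needs to read a data file."""

    def __init__(self, env_config=None):
        env_config = env_config or {}
        # Fall back to a path relative to this module's location, so the file
        # is found regardless of the worker's working directory; alternatively,
        # pass an absolute path from the driver via env_config["data_path"].
        default_path = os.path.join(
            os.path.dirname(os.path.abspath(__file__)), "data.csv")
        self.data_path = env_config.get("data_path", default_path)
        with open(self.data_path) as f:
            self.data = f.read()

On the driver side this would mean something like "env_config": {"data_path": os.path.abspath("data.csv")} in the trainer config.
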
0
votes
1 answer
ValueError: RolloutWorker has no input_reader object
I am using RLlib and I am trying to run APEX_DDPG with tune on a multi-agent environment with Ray v1.10 on Python 3.9.6.
I get the following error:
raise ValueError("RolloutWorker has no input_reader object! "
ValueError: RolloutWorker has no…

am mohi
- 1
0
votes
1 answer
Printing model summaries for rllib models
I have not seen anything in the RLlib documentation that would allow me to print a quick summary of the model, like print(model.summary()) in Keras. I tried using tf-slim and
variables =…

Mandias
- 742
- 5
- 17
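
For reference, a hedged sketch of one way to get such a summary in RLlib 1.x: the default TF models usually expose the underlying Keras model as base_model, while Torch models can simply be printed (attribute names can differ between RLlib versions and custom models):

from ray.rllib.agents.ppo import PPOTrainer

trainer = PPOTrainer(config={"framework": "tf", "num_workers": 0}, env="CartPole-v0")
policy = trainer.get_policy()

# Default TF models typically wrap a Keras model; summary() lists its layers.
policy.model.base_model.summary()

# With "framework": "torch", printing the model object gives a similar listing:
# print(trainer.get_policy().model)
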
0
votes
1 answer
Neural Network Outputs in RLLIB PPO Algorithm
I want to ask how the neural network output of a policy for a continuous action space is organized.
I know that the output in PPO contains the mean and std. dev. of the given actions.
However, how is this organized?
For example, the agent has 2…

Wadhah
- 1
- 1
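
For context, RLlib's default DiagGaussian distribution for continuous (Box) actions expects the model to output 2*N values for N action dimensions: the first N are the means, the last N the log standard deviations. A small illustrative sketch of that split (plain NumPy, not RLlib internals):

import numpy as np

# With 2 continuous action dimensions the policy net outputs 2 * 2 = 4 values:
# [mean_0, mean_1, log_std_0, log_std_1].
model_output = np.array([0.3, -1.2, -0.5, 0.1])

num_actions = 2
means = model_output[:num_actions]      # per-dimension means
log_stds = model_output[num_actions:]   # per-dimension log standard deviations

# Actions are sampled from independent Gaussians per dimension.
actions = np.random.normal(means, np.exp(log_stds))
print(actions)
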
0
votes
1 answer
Not getting any results for Ray RLlib in Google Colab even though memory is allocated
I am trying to follow this tutorial: https://github.com/anyscale/academy/blob/main/ray-rllib/02-Introduction-to-RLlib.ipynb. But when I run it on Google Colab, I am not getting any results. It only shows that the trial is pending and…

A_the_kunal
- 59
- 2
- 8
0
votes
1 answer
Extract agent from ray.tune
I have been using Azure Machine Learning to train a reinforcement learning agent using ray.tune.
My training function is as follows:
tune.run(
    run_or_experiment="PPO",
    config={
        "env": "Battery",
        "num_gpus"…

CarterB
- 502
- 1
- 3
- 13
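
A hedged sketch of the usual pattern for getting a trained agent back out of tune: keep checkpoints during the run, then rebuild a trainer with the same config and restore the best checkpoint (CartPole-v0 stands in for the custom "Battery" env, and the ExperimentAnalysis methods vary slightly across Ray versions):

import ray
from ray import tune
from ray.rllib.agents.ppo import PPOTrainer

ray.init()
config = {"env": "CartPole-v0", "num_workers": 1}

# Keep checkpoints so the trained agent can be rebuilt afterwards.
analysis = tune.run(
    "PPO",
    config=config,
    stop={"training_iteration": 10},
    checkpoint_at_end=True,
)

# Locate the last checkpoint of the best trial.
best_trial = analysis.get_best_trial(metric="episode_reward_mean", mode="max")
checkpoint_path = analysis.get_last_checkpoint(best_trial)

# Rebuild a trainer with the same config and load the trained weights.
agent = PPOTrainer(config=config)
agent.restore(checkpoint_path)
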
0
votes
1 answer
Does RLlib `rollout.py` work for evaluation?
TL;DR: RLlib's rollout command seems to be training the network, not evaluating.
I'm trying to use Ray RLlib's DQN to train, save, and evaluate neural networks on a custom-made simulator. To do so, I've been prototyping the workflow with OpenAI…

Kai Yun
- 97
- 8
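
As an alternative to rollout.py, evaluation can be done by hand with exploration switched off, so DQN's epsilon-greedy behaviour does not leak into the results. A minimal sketch under that assumption (the checkpoint path is a placeholder and CartPole-v0 stands in for the custom simulator):

import gym
from ray.rllib.agents.dqn import DQNTrainer

# Rebuild the trainer with the training config and restore a saved checkpoint.
config = {"env": "CartPole-v0", "num_workers": 0}
trainer = DQNTrainer(config=config)
trainer.restore("/path/to/checkpoint")  # placeholder path

env = gym.make("CartPole-v0")
obs, done, total_reward = env.reset(), False, 0.0
while not done:
    # explore=False disables epsilon-greedy sampling, so the rollout reflects
    # the greedy (evaluation) policy rather than the training behaviour.
    action = trainer.compute_action(obs, explore=False)
    obs, reward, done, _ = env.step(action)
    total_reward += reward
print("episode reward:", total_reward)
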
0
votes
1 answer
Policy network of PPO in Rllib
I want to set "actor_hiddens", a.k.a. the hidden layers of the policy network of PPO in RLlib, and be able to set their weights. Is this possible? If yes, please tell me how.
I know how to do it for DDPG in RLlib, but the problem with PPO is that I…

Anas BELFADIL
- 106
- 9
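
For comparison, PPO has no "actor_hiddens" key; the policy network layers come from the generic model config, and weights can be read and written through the policy object. A hedged sketch of that approach:

from ray.rllib.agents.ppo import PPOTrainer

config = {
    "env": "CartPole-v0",
    "num_workers": 0,
    "model": {"fcnet_hiddens": [64, 64]},  # hidden layers of the policy net
}
trainer = PPOTrainer(config=config)

policy = trainer.get_policy()
weights = policy.get_weights()   # numpy arrays keyed/ordered by layer
# ... modify `weights` here as needed ...
policy.set_weights(weights)      # write the modified weights back
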
0
votes
1 answer
Errors when trying to use DQN algorithm for FrozenLake Openai game
I am trying to make a very simple DQN algorithm work with the FrozenLake-v0 game, but I am getting errors. I understand that it could be overkill to use DQN instead of a Q-table, but I would nonetheless like it to work. Here is the code:
import…

mikanim
- 409
- 7
- 21
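
Since the question's code is truncated, here is only a minimal, hedged sketch of running RLlib's DQN on FrozenLake-v0 through tune; RLlib one-hot encodes the Discrete observation space internally, so no extra wrapper is strictly needed:

import ray
from ray import tune

ray.init()

tune.run(
    "DQN",
    config={
        "env": "FrozenLake-v0",
        "num_workers": 1,
        "framework": "tf",
    },
    stop={"training_iteration": 20},
)
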
0
votes
1 answer
Decreasing action sampling frequency for one agent in a multi-agent environment
I'm using RLlib for the first time with a custom multi-agent RL environment, and would like to train a couple of PPO agents on it. The implementation hiccup I need to figure out is how to alter the training for one special agent…

sh0831
- 1
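
One property worth noting: RLlib only queries a policy for the agents present in the obs dict returned by the environment, so the env itself can lower one agent's action frequency by omitting it on most steps. A hypothetical sketch of that idea (TwoSpeedEnv and its spaces are made up for illustration):

from gym.spaces import Discrete
from ray.rllib.env.multi_agent_env import MultiAgentEnv


class TwoSpeedEnv(MultiAgentEnv):
    """"fast" acts every step; "slow" is only asked to act every k-th step."""

    def __init__(self, config=None):
        config = config or {}
        self.k = config.get("slow_every", 4)
        self.observation_space = Discrete(5)
        self.action_space = Discrete(2)
        self.t = 0

    def reset(self):
        self.t = 0
        return {"fast": 0, "slow": 0}

    def step(self, action_dict):
        self.t += 1
        obs = {"fast": self.t % 5}
        rew = {"fast": 1.0}
        if self.t % self.k == 0:
            # Including "slow" here is what triggers an action request for it.
            obs["slow"] = self.t % 5
            rew["slow"] = 1.0
        done = {"__all__": self.t >= 100}
        return obs, rew, done, {}
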
0
votes
1 answer
Is there a way to train a PPOTrainer on one environment, then finish training it on a slightly modified environment?
I'm attempting to first train a PPOTrainer for 250 iterations on a simple environment, and then finish training it on a modified environment. (The only difference between the environments would be a change in one of the environment configuration…

sbrand
- 11
- 1
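
A hedged sketch of the usual two-phase pattern: train, save a checkpoint, then build a second trainer whose env_config differs and restore the weights into it. This works as long as the observation and action spaces stay the same (ToyEnv and its max_steps option are made up for illustration):

import gym
import ray
from gym.spaces import Discrete
from ray.rllib.agents.ppo import PPOTrainer
from ray.tune.registry import register_env


class ToyEnv(gym.Env):
    """Hypothetical env whose episode length depends on env_config."""

    def __init__(self, config=None):
        config = config or {}
        self.max_steps = config.get("max_steps", 10)
        self.observation_space = Discrete(2)
        self.action_space = Discrete(2)
        self.t = 0

    def reset(self):
        self.t = 0
        return 0

    def step(self, action):
        self.t += 1
        return self.t % 2, float(action), self.t >= self.max_steps, {}


ray.init(ignore_reinit_error=True)
register_env("toy_env", lambda cfg: ToyEnv(cfg))

base_config = {"env": "toy_env", "num_workers": 0, "env_config": {"max_steps": 10}}

# Phase 1: train on the original environment and save a checkpoint.
trainer = PPOTrainer(config=base_config)
for _ in range(5):  # 250 iterations in the question; kept small here
    trainer.train()
checkpoint = trainer.save()
trainer.stop()

# Phase 2: same spaces, modified env_config; restore the weights and keep training.
trainer2 = PPOTrainer(config=dict(base_config, env_config={"max_steps": 20}))
trainer2.restore(checkpoint)
for _ in range(5):
    trainer2.train()
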
0
votes
1 answer
Understanding tensorboard plots for PPO in RLLIB
I am a beginner in deep RL and would like to train my own gym environment in RLlib with the PPO algorithm. However, I am having some difficulty seeing whether my hyperparameter settings are successful. Apart from the obvious episode_reward_mean…

Carlz
- 1
- 2
0
votes
1 answer
Flow-Project tutorial 04 visualizer_rllib.py error
I am new to Flow and working through the examples. In the tutorial 04 visualizer example I get an AttributeError. The code in the cell is
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 200 --horizon 2000
and the error I get is:
File…

jaykobbiejnr
- 41
- 6
-1
votes
1 answer
SyntaxError when running "python examples/train.py singleagent_ring"
When I run python examples/train.py singleagent_ring
I get the following error:
file "examples/train.py", line 201
**config
^
SyntaxError: invalid syntax
Any help, please?

salaheddine
- 1
- 1