I am new to RLlib and trying to write a small program that takes a configuration file and trains an agent. The configuration file is a fine-tuned example for the CartPole-v1 environment, and I saved it as cartpole-ppo.yaml.

I am aware that the RLlib CLI can consume such a file directly, but I want to write a Python script that takes the configuration file as input and trains an agent. I tried multiple ways to do this, but I couldn't get any of them to work.
Here is the configuration file:
cartpole-ppo:
    env: CartPole-v1
    run: PPO
    stop:
        episode_reward_mean: 150
        timesteps_total: 100000
    config:
        # Works for both torch and tf.
        framework: torch
        gamma: 0.99
        lr: 0.0003
        num_workers: 1
        observation_filter: MeanStdFilter
        num_sgd_iter: 6
        vf_loss_coeff: 0.01
        model:
            fcnet_hiddens: [32]
            fcnet_activation: linear
            vf_share_layers: true
        enable_connectors: true
Now I am trying to write a main.py that takes this configuration file and runs the training as expected.
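For reference, loading the file with yaml.safe_load yields a nested dict keyed by the experiment name, so whatever fills the gaps below has to unpack that structure. A quick sanity check of my understanding (not part of main.py):

import yaml

with open("cartpole-ppo.yaml", "r") as f:
    experiments = yaml.safe_load(f)

# The single top-level key is the experiment name; its value
# holds the env, run, stop, and config sections of the YAML.
name, spec = next(iter(experiments.items()))
print(name)          # cartpole-ppo
print(spec["run"])   # PPO
print(spec["stop"])  # {'episode_reward_mean': 150, 'timesteps_total': 100000}

My incomplete main.py: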
import ray
import yaml
import gymnasium as gym

.......................

def train(config_file):
    with open(config_file, "r") as f:
        config = yaml.safe_load(f)

    .................

    return analysis


if __name__ == "__main__":
    ray.init()
    config_file = 'cartpole-ppo.yaml'
    train(config_file)
I want to fill in these gaps. I tried many ways, but to no avail. Please suggest a way to achieve this; for context, my closest attempt is sketched below.
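My idea was to map the tuned-example fields onto ray.tune.run: the run entry as the algorithm name, stop as the stopping criteria, and the top-level env merged into config. The exact tune.run usage here is my guess from the Ray Tune docs, so this sketch may be exactly where I am going wrong:

import ray
import yaml
from ray import tune


def train(config_file):
    with open(config_file, "r") as f:
        experiments = yaml.safe_load(f)

    # Tuned-example format: a single top-level experiment name
    # mapping to its spec (env, run, stop, config).
    name, spec = next(iter(experiments.items()))

    # RLlib expects the environment inside the config dict, so
    # merge the top-level "env" entry into the "config" section.
    config = {"env": spec["env"], **spec["config"]}

    analysis = tune.run(
        spec["run"],         # algorithm name, e.g. "PPO"
        name=name,
        stop=spec["stop"],   # e.g. {"episode_reward_mean": 150, ...}
        config=config,
    )
    return analysis


if __name__ == "__main__":
    ray.init()
    train("cartpole-ppo.yaml")

Is this roughly the right direction, or is there a more idiomatic way (for example, building a PPOConfig from the dict) to run a tuned-example YAML from Python?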