Following the SimpleCorridor example, I can create my own env and train a model on it, which is great. But when I try to evaluate the trained model, RLlib does not recognize my custom env.
How can I evaluate a trained model on a custom environment?
When I use rllib rollout ...
as suggested here, it does not recognize my env because it's a custom one. I was hoping for a function analogous to run_experiments, say evaluate_experiment, that I could call from within one of my own project files.
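To make what I mean concrete, here is a rough sketch of the kind of helper I have in mind. Everything here is hypothetical on my side (the function name evaluate_experiment, the import path my_project.envs, the env name, the checkpoint path, and the assumption that PPO was the trained algorithm); the RLlib calls are the register_env / Trainer.restore / compute_action style API as I understand it:

```python
def evaluate_experiment(checkpoint_path, num_episodes=5):
    """Hypothetical helper: restore a trained agent and roll it out
    on a custom env registered by name (a sketch, not tested)."""
    import ray
    from ray.tune.registry import register_env
    from ray.rllib.agents.ppo import PPOTrainer  # assuming PPO was used

    from my_project.envs import MyCustomEnv  # hypothetical import path

    # Register the custom env under a name RLlib can resolve,
    # so the trainer config can refer to it as "my_custom_env".
    register_env("my_custom_env", lambda env_config: MyCustomEnv(env_config))

    ray.init(ignore_reinit_error=True)
    trainer = PPOTrainer(config={"env": "my_custom_env"})
    trainer.restore(checkpoint_path)  # load the trained weights

    # Plain rollout loop against a fresh env instance.
    env = MyCustomEnv({})
    for _ in range(num_episodes):
        obs, done, total_reward = env.reset(), False, 0.0
        while not done:
            action = trainer.compute_action(obs)
            obs, reward, done, _ = env.step(action)
            total_reward += reward
        print("episode reward:", total_reward)
```

Something along these lines, callable from my own code, would solve the problem for me.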
That's the issue. If you want to see my custom env, it's this one.
Right now I'm having to copy my environment into the gym/envs/
package directory and register it in the __init__.py
file there.
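Concretely, my current workaround means adding a registration line like this inside gym/envs/__init__.py (the id and entry_point are just placeholders for my own env, and this assumes the file I pasted in is gym/envs/my_custom_env.py):

```python
# Added to gym/envs/__init__.py, alongside the built-in registrations
# (id and entry_point are placeholders for my env):
register(
    id="MyCustomEnv-v0",
    entry_point="gym.envs.my_custom_env:MyCustomEnv",
)
```

Obviously patching an installed package like this is fragile, which is why I'm asking for an alternative.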
It would be good to have another way to do this.
Thanks