I'm doing reinforcement learning, and I'm having trouble with performance.
Situation, no custom code:
I loaded a Google Deep Learning VM (https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning) on Google Cloud.…
import numpy as np
import gym
from gym import wrappers  # added
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import…
I am using a Keras backend function to compute the gradient in a reinforcement learning setup, and the following is the snippet of code. For this code, I am getting the error shown below as well. What could be the reason for it?
X =…
I get the following error when trying to use TensorFlow (the newest version as of the date of posting) on a MacBook Pro (CPU only) dual-booting Ubuntu 16.04 LTS,
in a virtualenv created with --no-site-packages, with Keras, keras-rl, and Python 2.7.
...
Using…
I have found the keras-rl/examples/cem_cartpole.py example and I would like to understand it, but I can't find any documentation.
What does the line
memory = EpisodeParameterMemory(limit=1000, window_length=1)
do? What is the limit and what is the…
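As I understand the keras-rl API, `limit` caps how many complete episodes the memory retains (the oldest are discarded first), and `window_length` is how many consecutive observations are stacked into a single state (1 means each state is a single observation). A minimal stdlib-only sketch of the `limit` semantics, with hypothetical names (`EpisodeMemorySketch` is not part of keras-rl):

```python
from collections import deque

class EpisodeMemorySketch:
    """Toy illustration of a bounded episode memory.

    `limit` caps how many complete episodes are retained (oldest
    episodes are dropped first), analogous to the `limit` argument
    of keras-rl's EpisodeParameterMemory.
    """
    def __init__(self, limit):
        self.episodes = deque(maxlen=limit)  # old episodes fall off the left

    def append_episode(self, params, total_reward):
        self.episodes.append((params, total_reward))

memory = EpisodeMemorySketch(limit=3)
for i in range(5):
    memory.append_episode(params={"w": i}, total_reward=float(i))

# Only the 3 most recent episodes survive.
print(len(memory.episodes))             # 3
print([r for _, r in memory.episodes])  # [2.0, 3.0, 4.0]
```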
When I try to run TensorBoard with keras-rl (DQNAgent):
tb_callback = TensorBoard('/home/jose/TED/MLU_minimization/logs', update_freq=1)
dqn.fit(env, nb_steps=5000000, visualize=False, verbose=1, nb_max_episode_steps=None, log_interval=10000,…
This is the minimal example to reproduce the problem:
from keras.models import Sequential
from keras.layers import Dense, Flatten, LeakyReLU
from keras.regularizers import l1
from rl.agents.dqn import DQNAgent
reg = l1(1e-5)
relu_alpha =…
I'm working on a DQN model that trains on a CustomEnv from OpenAI Gymnasium. My observation space has just one dimension, with shape (8,), and that's going to be the input of my neural network. I first used a model with only fully connected (dense) layers, like so:
def…
I'm working on a project where I want to train an agent to find optimal routes in a road network (graph). I built the custom env with OpenAI Gym, and I'm building the model and training the agent with Keras and keras-rl, respectively.
The problem is…
For the last day, I've been trying to deal with an error I get in the DQNAgent fit function.
I get the following error:
ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (1, 3, 4)
in dqnagent.fit…
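The `(1, 3, 4)` shape suggests keras-rl prepends a `window_length` axis, so the network receives `(window_length,) + obs_shape` per sample rather than a flat vector; a common remedy (an assumption about this poster's setup) is to start the model with a `Flatten` layer on that input shape. A stdlib-only sketch of the shape arithmetic:

```python
def dqn_input_shape(window_length, obs_shape):
    """Shape keras-rl feeds the network per sample (excluding the batch
    axis): the agent stacks `window_length` consecutive observations."""
    return (window_length,) + tuple(obs_shape)

def total_units(shape):
    """Number of units a Flatten layer would produce from that input."""
    n = 1
    for d in shape:
        n *= d
    return n

# An observation of shape (3, 4) with window_length=1 arrives as (1, 3, 4),
# which a Dense layer expecting 2-D input rejects; flattening gives 12 units.
shape = dqn_input_shape(window_length=1, obs_shape=(3, 4))
print(shape)               # (1, 3, 4)
print(total_units(shape))  # 12
```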
Just as the title says, I keep running into an error when following a tutorial to make a reinforcement learning agent using keras RL. The code of which is below:
import gym
import random
import numpy as np
from tensorflow.keras.models import…
I'm trying to train an agent using TensorFlow and keras-rl2 to play a Gym environment called CartPole-v1, and I'm using Google Colaboratory.
This is my implementation:
!pip install gym[classic_control]
!pip install keras-rl2
import…
I am trying to use the keras-rl2 DQNAgent to solve the taxi problem in OpenAI Gym.
For a quick refresher, please see the Gym documentation, thank you!
https://www.gymlibrary.dev/environments/toy_text/taxi/
Here is my process:
0.Open the Taxi-v3…
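One wrinkle with Taxi-v3 (per the linked docs, it has a `Discrete(500)` observation space) is that each observation is a single integer, while dense layers expect a vector; a common workaround, not necessarily what this poster needs, is one-hot encoding the state. A stdlib-only sketch:

```python
def one_hot(state, n_states):
    """Encode a discrete state index as a one-hot vector,
    so it can be fed into a dense network."""
    v = [0.0] * n_states
    v[state] = 1.0
    return v

# Taxi-v3 has 500 discrete states (per the linked Gym documentation).
vec = one_hot(42, 500)
print(len(vec))        # 500
print(vec.index(1.0))  # 42
```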
I have an RL problem where I want the agent to make a selection of x out of an array of size n.
I.e. if I have [0, 1, 2, 3, 4, 5] then n = 6 and if x = 3 a valid action could be
[2, 3, 5].
Right now, what I tried is to have n scores:
Output n continuous…
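One common way to turn n continuous scores into a choice of x items out of n (a sketch of the idea, not necessarily this poster's intended design) is to take the indices of the x largest scores as the action:

```python
def top_x_action(scores, x):
    """Select the indices of the x highest scores, turning n continuous
    network outputs into a choice of x items out of n."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:x])  # sort the indices for a canonical action

scores = [0.1, 0.9, 0.3, 0.8, 0.2, 0.7]
print(top_x_action(scores, 3))  # [1, 3, 5]
```

A caveat with this approach is that the argmax is not differentiable, so the scores are usually trained indirectly (e.g. via a policy-gradient or Q-learning objective) rather than through the selection itself.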
I made an env with Gym for a Sudoku puzzle and I want to train an AI on it using KerasRL (I've removed the step, reset, and render methods of the environment so as not to have too much code for Stack Overflow).
I use a Flatten layer and 3 Dense layers for my model…
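For a Sudoku board, the Flatten layer's job is to turn the 9x9 grid observation into an 81-unit vector before the dense layers. A stdlib-only sketch of what that flattening does (the helper name is hypothetical):

```python
def flatten_grid(grid):
    """Flatten a 2-D board (e.g. a 9x9 Sudoku grid) into a 1-D vector,
    which is what a Flatten layer does ahead of the dense layers."""
    return [cell for row in grid for cell in row]

# A 9x9 grid of digits becomes an 81-element input vector.
grid = [[(r * 9 + c) % 10 for c in range(9)] for r in range(9)]
flat = flatten_grid(grid)
print(len(flat))  # 81
```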