DQN (Deep Q-Network) is a multi-layer neural network that extends Q-learning with a target network and experience replay.
Questions tagged [dqn]
206 questions
0 votes, 1 answer
How to take two arrays as output from Gym.Env to fit to DQN NN
Can't figure out how to make the gym.Env output two separate arrays. It seems to combine them into one array containing two arrays, but fitting the DQN NN expects two arrays.
I'm hoping to feed the two arrays into the NN separately.
I've tried to…

Cam Worrall
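One common workaround (a sketch under assumed shapes, not the asker's actual env): keep the combined observation as the env emits it, and split it into the two arrays just before feeding the network's two inputs.

```python
import numpy as np

# Hypothetical combined observation: the env returns one flat array that
# concatenates a 4-float state vector and a 3-float auxiliary vector.
combined = np.array([0.1, 0.2, 0.3, 0.4, 9.0, 8.0, 7.0], dtype=np.float32)

def split_observation(obs, first_len=4):
    """Split the flat observation into the two arrays the network expects."""
    return obs[:first_len], obs[first_len:]

state_vec, aux_vec = split_observation(combined)
print(state_vec.shape, aux_vec.shape)  # (4,) (3,)
```

The alternative is to declare a `gym.spaces.Tuple` or `gym.spaces.Dict` observation space so the env itself yields the two arrays separately.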
0 votes, 0 answers
Predicting a scalar value using an Embedding layer
I followed the following tutorial to implement the taxi domain and DQN.
However, when predicting values for a batch, all inputs get the same value.
Assume that the input for the embedding layer has the form [float] in the interval [0, 1] and can…

HenDoNR
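A likely cause (a hedged guess, illustrated with a hypothetical lookup table rather than the Keras layer): an embedding layer indexes rows by integer id, so floats in [0, 1] all truncate to index 0, and every input retrieves the same vector.

```python
import numpy as np

rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(10, 4))  # 10 ids, one 4-dim vector each

raw_inputs = np.array([0.12, 0.57, 0.93])   # floats in the interval [0, 1]
indices = raw_inputs.astype(np.int64)       # truncated to integers: all 0
print(indices)                              # [0 0 0]

vectors = embedding_table[indices]          # the same row three times
print(np.allclose(vectors[0], vectors[1]))  # True -> identical predictions

# One fix: bucketize the floats into distinct integer ids first.
bucketized = (raw_inputs * 10).astype(np.int64)
print(bucketized)                           # [1 5 9]
```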
0 votes, 0 answers
Tensorflow DqnAgent policy vs collect_policy
Since there is no explanation in the TF API docs of what collect_policy really is, I looked into the source code: https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py
Can you describe policy and collect_policy as…

Boppity Bop
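In short, per the TF-Agents source: agent.policy is the greedy policy meant for evaluation and deployment, while agent.collect_policy wraps it with exploration (epsilon-greedy for DQN) to gather training data. A library-free numpy sketch of the distinction:

```python
import numpy as np

q_values = np.array([1.0, 3.0, 2.0])  # Q-estimates for three actions

def eval_policy(q):
    # Like agent.policy: always exploit the current estimates.
    return int(np.argmax(q))

def collect_policy(q, epsilon, rng):
    # Like agent.collect_policy: explore with probability epsilon.
    if rng.random() < epsilon:
        return int(rng.integers(len(q)))
    return int(np.argmax(q))

rng = np.random.default_rng(0)
print(eval_policy(q_values))  # 1 (the greedy action)
sampled = {collect_policy(q_values, 0.5, rng) for _ in range(200)}
# Exploration also visits the non-greedy actions 0 and 2.
```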
0 votes, 0 answers
Reinforcement Learning with DQN approach on stock prices
I have programmed a reinforcement learning model with a DQN approach that is supposed to make purchase decisions based on stock prices.
For training I use two stock price series: one with an upward trend and one with a downward trend. The time period for both is…

masterkey
0 votes, 0 answers
Proper object type to pass in as time_step_spec to tf-agent?
I'm trying to pass in a BoundedArraySpec as time_step_spec
tf_agents.agents.DqnAgent(
time_step_spec = tf_agents.specs.BoundedArraySpec(...
but ultimately, I get this error
AttributeError: 'BoundedArraySpec' object has no attribute…

tgm_learn
0 votes, 1 answer
tf_agents dqn fails to initialize
Even though tf_agents' initialize() requires no arguments, this line
agent.initialize()
produces this error
TypeError: initialize() missing 1 required positional argument: 'self'
I've tried agent.initialize(agent) because it apparently wanted…

tgm_learn
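That TypeError almost always means initialize() was called on the class rather than on an instance — i.e. the agent object was never actually constructed. A generic Python illustration (a stand-in class, no tf_agents required):

```python
class Agent:
    """Stand-in for tf_agents.agents.DqnAgent (illustration only)."""
    def initialize(self):
        return "initialized"

# Bug: calling through the class -- there is no instance to bind 'self' to.
try:
    Agent.initialize()
except TypeError as err:
    msg = str(err)
print(msg)  # ... missing 1 required positional argument: 'self'

# Fix: construct the agent first, then call the bound method.
agent = Agent()
print(agent.initialize())  # initialized
```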
0 votes, 0 answers
What is the purpose of the observation_space in OpenAI Gym if I am going to input the state of the environment into my DQN for training
I am confused by the two terms 'observation_space' and 'state', and I do not see the purpose of even having 'observation_space' in my code in the first place. I have seen other answers, but I dove deeper into the code of RL algorithms…

Zezimabig
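One answer in brief: observation_space is the declared contract (shape, dtype, bounds) of what states look like, so generic training code can size its networks and validate inputs without ever seeing a concrete state. A sketch with a hypothetical stub env (not the real Gym classes):

```python
import numpy as np

class StubEnv:
    """Hypothetical stand-in for a Gym env, exposing only the declared shape."""
    observation_shape = (4,)  # what Box(..., shape=(4,)).shape would report

    def reset(self):
        # A concrete *state*: one sample that conforms to the declared space.
        return np.zeros(self.observation_shape, dtype=np.float32)

env = StubEnv()
# Generic DQN code sizes its input layer from the space, not from a sample:
input_dim = int(np.prod(env.observation_shape))
print(input_dim)  # 4
assert env.reset().shape == env.observation_shape
```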
0 votes, 1 answer
Double DQN performs significantly worse than vanilla DQN
I have an agent that has to explore a customized environment.
The environment is a grid (100 squares horizontally, 100 squares vertically, each square is 10 meters wide).
In the environment, there are a number of users (called ues) whose positions…

Ness
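For reference when comparing the two algorithms, the targets differ in a single step: vanilla DQN lets the target network both select and evaluate the next action, while double DQN selects with the online network and evaluates with the target network. A numpy sketch with made-up numbers:

```python
import numpy as np

gamma = 0.99
rewards = np.array([1.0, 0.0])                       # batch of two transitions
q_online_next = np.array([[1.0, 5.0], [2.0, 0.5]])   # online net at next states
q_target_next = np.array([[4.0, 3.0], [1.5, 0.9]])   # target net at next states

# Vanilla DQN: the target net selects AND evaluates (max over its own values).
vanilla_target = rewards + gamma * q_target_next.max(axis=1)

# Double DQN: online net selects, target net evaluates (less overestimation).
best_actions = q_online_next.argmax(axis=1)          # [1, 0]
double_target = rewards + gamma * q_target_next[np.arange(2), best_actions]

print(vanilla_target)  # rewards + 0.99 * [4.0, 1.5]
print(double_target)   # rewards + 0.99 * [3.0, 1.5] -> never above vanilla
```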
0 votes, 1 answer
Deep Reinforcement Learning, how to make an agent that control many machines
Good morning. I'm facing an RL problem with many constraints. The main idea is that my agent will control many different machines, for example ordering them to go out to carry out their missions (the mission itself is not important), or…

koussix
0 votes, 0 answers
Convolutional layer error using tf.agents with GPU activated
I'm running the DQN training tutorial with tf.agents (https://www.tensorflow.org/agents/tutorials/1_dqn_tutorial), and I am trying to change the model they use from just dense layers to one with some convolutional layers on top.
When I run this on Colab without…

José Luis Neves
0 votes, 1 answer
Keras-rl ValueError: "Model has more than one output. DQN expects a model that has a single output"
Is there any way to get around this error? I have a model with a 15x15 input grid, which leads to two outputs. Each output has 15 possible values, which are x or y coordinates. I did this because it is significantly simpler than having 225 separate…

Mercury
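The usual workaround (a sketch, assuming a 15x15 grid like the asker's) is to keep a single output head of 15 * 15 = 225 actions and encode/decode the (x, y) pair:

```python
GRID = 15  # board is 15x15, so 225 flat actions

def encode_action(x, y, grid=GRID):
    """Map an (x, y) coordinate pair to a single discrete action id."""
    return x * grid + y

def decode_action(action, grid=GRID):
    """Inverse mapping: recover (x, y) from the flat action id."""
    return divmod(action, grid)

a = encode_action(7, 3)
print(a)                 # 108
print(decode_action(a))  # (7, 3)
```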
0 votes, 1 answer
How can I make the target size equal the input size in my DQN code?
Everyone! When I was doing DQN programming, I encountered some problems. The error says
“UserWarning: Using a target size (torch.Size([32,32])) that is different to the input size (torch.Size([32,1])). This will likely lead to incorrect results due…

Fo Oc
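That warning typically appears when the Q-values for all actions are compared against a (batch, 1) target; the usual fix is to first select the Q-value of the action actually taken so both sides share one shape. A numpy sketch of the shapes (torch.gather is the PyTorch equivalent of take_along_axis here; the sizes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
batch_size, n_actions = 32, 4                       # hypothetical sizes

q_all = rng.normal(size=(batch_size, n_actions))    # net output: (32, 4)
actions = rng.integers(n_actions, size=(batch_size, 1))

# Keep only the Q-value of the taken action -> shape (32, 1).
q_taken = np.take_along_axis(q_all, actions, axis=1)

targets = rng.normal(size=(batch_size, 1))          # must also be (32, 1)

assert q_taken.shape == targets.shape == (batch_size, 1)
loss = float(np.mean((q_taken - targets) ** 2))     # MSE with no broadcasting
```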
0 votes, 1 answer
ValueError: Error when checking input: expected Input_input to have 4 dimensions, but got array with shape (1, 1, 2)
I am trying to create a Flappy Bird AI with convolutional and dense layers, but at the training step (the fit() function) I get the following error message:
dqn.fit(env, nb_steps=500000, visualize=False, verbose=2)
Training for 500000 steps…

chana33
0 votes, 0 answers
DQN model (Game: Atari PongNoFrameskip) does not learn
I'm trying to implement a DQN model for the game Pong. However, it still behaves essentially randomly even after about 1000 episodes; the CNN training does not seem to improve the agent.
Here is my main code:
I create a CNN including three convolution…

speedhawk1
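Worth checking in cases like this: Atari Pong usually needs far more experience than 1000 episodes, plus a slow exploration decay. A sketch of a linear epsilon schedule (the constants are illustrative, not tuned):

```python
def epsilon(frame, start=1.0, end=0.02, decay_frames=1_000_000):
    """Linearly decay exploration from `start` to `end`, then hold flat."""
    frac = min(frame / decay_frames, 1.0)
    return start + frac * (end - start)

print(epsilon(0))                    # 1.0
print(round(epsilon(500_000), 2))    # 0.51
print(round(epsilon(2_000_000), 2))  # 0.02
```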
0 votes, 1 answer
The DQN model does not produce the expected scores
I am working on a DQN training model for the game "CartPole-v1". The system did not report any error messages in the terminal; however, the evaluation results got worse. This is the output data:
episode: 85 score: 18 avarage score:…

speedhawk1