Questions tagged [dqn]
DQN (Deep Q-Network) is a multi-layer neural network that adds a target network and experience replay to Q-learning.
206 questions
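The two ingredients named in the tag description are easiest to see in code. Below is a minimal sketch, assuming a toy PyTorch setup with made-up sizes and random transitions in place of a real environment; it is an illustration, not a reference implementation.

import random
from collections import deque

import torch
import torch.nn as nn

# Toy sizes, for illustration only.
STATE_DIM, N_ACTIONS, GAMMA = 4, 2, 0.99

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
target_net.load_state_dict(q_net.state_dict())        # target starts as a copy

replay = deque(maxlen=10_000)                          # experience replay buffer
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# Fill the buffer with random (s, a, r, s', done) transitions in place of a real env.
for _ in range(256):
    replay.append((torch.randn(STATE_DIM), random.randrange(N_ACTIONS),
                   random.random(), torch.randn(STATE_DIM),
                   float(random.random() < 0.05)))

def train_step(batch_size=32):
    s, a, r, s2, done = zip(*random.sample(replay, batch_size))
    s, s2 = torch.stack(s), torch.stack(s2)
    a, r, done = torch.tensor(a), torch.tensor(r), torch.tensor(done)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for the taken actions
    with torch.no_grad():                              # bootstrap from the frozen target net
        target = r + GAMMA * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

for step in range(100):
    train_step()
    if step % 50 == 0:                                 # periodically sync the target network
        target_net.load_state_dict(q_net.state_dict())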
0 votes, 0 answers
Why does DQN performance fluctuate so much?
When I run DQN and check the performance of the policy, it often shows high fluctuation. It is also not difficult to find performance plots like this online.
[Graph images showing large fluctuations in performance]
I am quite confused why…

user3315463
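One practical aid when inspecting such noisy curves is a moving average over the per-episode returns; a minimal NumPy sketch, assuming the returns are collected in a list:

import numpy as np

def moving_average(returns, window=100):
    """Smooth a noisy per-episode return curve for plotting."""
    returns = np.asarray(returns, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(returns, kernel, mode="valid")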
0 votes, 0 answers
PyTorch - modifying tensor shapes in forward(), how to handle it in backward()?
I'm using a PyTorch DQN for reinforcement learning on card games. I use a convolution layer to detect sequence- and suit-related patterns in a 4x13 "image" of the state of the deck of cards. I then flatten the output of this layer, flatten the…

black-ejs
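For reshapes specifically, PyTorch's autograd records view/flatten operations and differentiates through them automatically, so no custom backward() is needed. A minimal sketch, with layer sizes assumed from the 4x13 description:

import torch
import torch.nn as nn

class CardNet(nn.Module):
    def __init__(self, n_actions=10):
        super().__init__()
        self.conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.fc = nn.Linear(16 * 4 * 13, n_actions)

    def forward(self, x):            # x: (batch, 1, 4, 13)
        x = torch.relu(self.conv(x))
        x = x.flatten(start_dim=1)   # autograd handles the reshape in backward
        return self.fc(x)

net = CardNet()
out = net(torch.randn(2, 1, 4, 13))
out.sum().backward()                 # gradients flow through the flatten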
0 votes, 0 answers
The model's loss.grad is not None, but the parameters' grad is None when training
I'm training an adversarial net against a DQN; the loss uses the Q-values of the original obs and the attacked obs. When training the net, I find that the grad value of the parameters is None. How can I resolve it?
for epoch in range(n_epoch):
for…

li xj
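A common cause of a parameter's .grad being None is that the loss was computed from tensors detached from the graph, e.g. Q-values taken under torch.no_grad() or via .detach(). A small sketch of the check, with a toy linear net standing in for the adversarial model:

import torch
import torch.nn as nn

net = nn.Linear(4, 2)             # stand-in for the adversarial net
obs = torch.randn(8, 4)

# Wrong: no_grad() detaches the output, so backward() reaches no parameters.
# with torch.no_grad():
#     q = net(obs)

q = net(obs)                      # keep the graph intact
loss = q.mean()
loss.backward()
for name, p in net.named_parameters():
    print(name, p.grad is not None)   # should print True for every parameter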
0 votes, 0 answers
Deep Q-learning: calculating q_values gives "RuntimeError: mat1 and mat2 shapes cannot be multiplied"
I am trying to implement a DeepQLearning class and a DQN class, but there is a problem with the calculation.
The agent acts in a state, and the state is expressed as int data, for example 0, 1, 2, 3, ...
I should then calculate the q_values from the state,…

원종진
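When the state is a plain integer, one common fix is to encode it (for example, one-hot) into a float vector whose length matches the first Linear layer's in_features. A sketch with assumed sizes:

import torch
import torch.nn as nn

N_STATES, N_ACTIONS = 10, 4
q_net = nn.Sequential(nn.Linear(N_STATES, 32), nn.ReLU(), nn.Linear(32, N_ACTIONS))

def q_values(state: int) -> torch.Tensor:
    x = torch.zeros(1, N_STATES)   # batch of one, one-hot encoded
    x[0, state] = 1.0
    return q_net(x)                # shape (1, N_ACTIONS), no mat1/mat2 mismatch

print(q_values(3))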
0 votes, 0 answers
DQN with LSTM layers in Keras-rl2, understanding input_shape
I'm working on a DQN model that trains on a CustomEnv from OpenAI Gymnasium. My observation space has just one dimension, with shape (8,), and that's going to be the input of my neural network. I first used a model with fully connected dense layers like so:
def…

Aldair CB
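keras-rl2 feeds the network batches of shape (batch, window_length, *observation_shape), so an LSTM variant typically takes input_shape=(window_length, 8). A sketch, where the window length and layer sizes are assumptions:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

WINDOW_LENGTH, OBS_DIM, N_ACTIONS = 4, 8, 3

model = Sequential([
    # keras-rl stacks WINDOW_LENGTH consecutive observations along axis 1
    LSTM(64, input_shape=(WINDOW_LENGTH, OBS_DIM)),
    Dense(N_ACTIONS, activation="linear"),
])
model.summary()

On the memory side, keras-rl's SequentialMemory(limit=..., window_length=WINDOW_LENGTH) is what produces that window axis.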
0 votes, 0 answers
AttributeError: 'TimeStep' object has no attribute 'time_step_spec'
How do I fix this AttributeError for an LSTM?
File "run.py", line 13, in
from TraderEnv import TraderEnv
File "C:\Users\Admin\Desktop\RL-Forex-trader-LSTM-master\TraderEnv.py", line 424, in
lstm_state_eval_policy = LSTMStatePolicy(agent.policy,…
0 votes, 1 answer
How to use masking in keras-rl with DQNAgent?
I'm working on a project where I want to train an agent to find optimal routes in a road network (graph). I built the custom env with OpenAI Gym, and I'm building the model and training the agent with Keras and Keras-rl, respectively.
The problem is…

Aldair CB
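keras-rl has no built-in action masking, so a common workaround (a sketch, not a library API) is to push the Q-values of invalid actions to -inf before the greedy argmax:

import numpy as np

def masked_greedy_action(q_values: np.ndarray, valid_mask: np.ndarray) -> int:
    """Pick the best action among the valid ones.

    q_values:   shape (n_actions,), raw network output
    valid_mask: shape (n_actions,), True where the action is allowed
    """
    masked = np.where(valid_mask, q_values, -np.inf)
    return int(np.argmax(masked))

q = np.array([1.0, 3.0, 2.0])
mask = np.array([True, False, True])   # action 1 is off the graph
print(masked_greedy_action(q, mask))   # -> 2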
0 votes, 1 answer
In a DQN for Q-learning, how should I apply high gamma values during experience replay?
I'm using PyTorch to implement a Q-learning approach to a card game, where the rewards come only at the end of the hand, when a score is calculated. I am using experience replay with high gammas (0.5-0.95) to train the network.
My question is about how…

black-ejs
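For terminal-only rewards, the usual one-step target during replay is r + gamma * max_a' Q_target(s', a'), with the bootstrap term zeroed at episode end; a high gamma is what lets the final score propagate back through replayed transitions. A sketch of just that computation, assuming PyTorch tensors:

import torch

def td_targets(rewards, next_q_max, dones, gamma=0.9):
    """One-step targets: r + gamma * max_a' Q(s',a'), cut off at terminals."""
    return rewards + gamma * next_q_max * (1.0 - dones)

r     = torch.tensor([0.0, 0.0, 5.0])   # score arrives only on the last step
q_max = torch.tensor([1.2, 0.8, 0.0])
done  = torch.tensor([0.0, 0.0, 1.0])
print(td_targets(r, q_max, done))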
0 votes, 1 answer
ValueError: too many values to unpack (expected 4) --> dqn.fit() --> env.step()
I am working with the new version of keras-rl2, trying to train my DQN agent. I have trouble with the fit function - https://github.com/tensorneko/keras-rl2/blob/master/rl/core.py . This is the documentation for the Agent class (line 147 --> env.step())…
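This unpack error typically appears when a Gymnasium environment, whose step() returns five values (obs, reward, terminated, truncated, info), is driven by keras-rl code expecting the old four-value Gym API. A thin compatibility wrapper is a common workaround; a sketch, assuming a Gymnasium-style env:

import gymnasium as gym

class GymCompatWrapper(gym.Wrapper):
    """Collapse Gymnasium's 5-tuple step into the old 4-tuple keras-rl expects."""

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        return obs, reward, terminated or truncated, info

    def reset(self, **kwargs):
        obs, info = self.env.reset(**kwargs)
        return obs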
0 votes, 1 answer
Training a DQN agent slows down and then crashes at around 50 episodes
I am training a DQN agent; at around 50 episodes, the fit call in the replay function starts slowing down and begins freezing the PC. After a while, PyCharm just crashes. The epochs go from 10-13 ms to seconds, and eventually it freezes entirely.
This…

willem12
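A frequent cause of this pattern is an unbounded replay memory (a plain list that keeps growing) or fitting on ever more data each episode. Capping the buffer and training on fixed-size batches is a cheap first check; a sketch with an assumed transition format:

from collections import deque
import random

replay_memory = deque(maxlen=100_000)   # old transitions are dropped automatically

def remember(state, action, reward, next_state, done):
    replay_memory.append((state, action, reward, next_state, done))

def sample_batch(batch_size=32):
    # Train on a fixed-size random batch, not on the whole memory.
    return random.sample(replay_memory, min(batch_size, len(replay_memory)))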
0 votes, 1 answer
Learning of DQN with noisy data
I'm trying some experiments with DQN in a simple navigation task with binary rewards at the end of the episode. DQN is working perfectly well. Now I'm thinking of perturbing the reward, meaning that 10% of the time the binary reward is inverted. Will…

user19826638
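The perturbation described can be expressed in a couple of lines; a sketch, assuming rewards in {0, 1}:

import random

def noisy_reward(reward: float, flip_prob: float = 0.1) -> float:
    """Invert a binary reward with probability flip_prob."""
    return 1.0 - reward if random.random() < flip_prob else reward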
0 votes, 1 answer
How to pass a custom type as an observation to DQN agent using PyTorch?
I want to pass a custom state (observation) to my agent, which includes an array of custom-type objects (of a class I defined called Task), a battery level (integer), resources (integer), and a channel gain (integer). When I pass the described state, it…

user13238656
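Networks consume numeric tensors, so a common approach is to serialize the custom objects and scalars into one flat float vector first. A sketch, where the Task fields are hypothetical stand-ins:

import numpy as np
import torch

class Task:                      # hypothetical stand-in for the user's class
    def __init__(self, size, deadline):
        self.size, self.deadline = size, deadline

def encode_state(tasks, battery, resources, channel_gain):
    """Flatten Tasks plus the scalar fields into one float vector."""
    task_feats = [f for t in tasks for f in (t.size, t.deadline)]
    vec = np.array(task_feats + [battery, resources, channel_gain], dtype=np.float32)
    return torch.from_numpy(vec)

obs = encode_state([Task(3, 7), Task(1, 2)], battery=80, resources=4, channel_gain=2)
print(obs.shape)   # torch.Size([7]) -- feed this to the DQN's input layer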
0 votes, 1 answer
"Size mismatch - weight - bias" error when loading Deep Q Network model for evaluation
I am trying to evaluate the performance of a trained DQN model with the following Deep Q-Network:
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

class DeepQNetwork(nn.Module):
    def __init__(self, lr, n_actions, name,…

user20441815
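A size mismatch on load_state_dict usually means the network was rebuilt with different constructor arguments (say, a different n_actions or hidden width) than at save time; the fix is to recreate it identically before loading. A sketch with an assumed architecture:

import torch
import torch.nn as nn

class DeepQNetwork(nn.Module):
    def __init__(self, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(8, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_actions))

    def forward(self, x):
        return self.net(x)

# Must match the arguments used when the checkpoint was saved,
# otherwise the weight/bias shapes in the state dict will not line up.
model = DeepQNetwork(n_actions=4, hidden=128)
torch.save(model.state_dict(), "dqn.pt")            # stand-in checkpoint
model.load_state_dict(torch.load("dqn.pt"))
model.eval()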
0 votes, 0 answers
How to pass more than one input from the get_obs function to a neural network?
Here is my custom gym env:
class PricePredictor(gym.Env):
    def __init__(self):
        ...
        self.action_space = gym.spaces.Discrete(3, start=-1)
        self.observation_space = gym.spaces.Dict({
            …

Saiteja
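For a Dict observation space, one low-effort route is Gym's FlattenObservation wrapper, which concatenates every entry into a single Box vector the network can consume directly. A sketch, where the env contents are assumptions standing in for PricePredictor:

import numpy as np
import gymnasium as gym
from gymnasium import spaces
from gymnasium.wrappers import FlattenObservation

class TwoInputEnv(gym.Env):
    """Toy env with a Dict observation, standing in for PricePredictor."""
    def __init__(self):
        self.action_space = spaces.Discrete(3)
        self.observation_space = spaces.Dict({
            "price":  spaces.Box(low=0.0, high=1.0, shape=(5,), dtype=np.float32),
            "volume": spaces.Box(low=0.0, high=1.0, shape=(3,), dtype=np.float32),
        })

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        return self.observation_space.sample(), 0.0, False, False, {}

env = FlattenObservation(TwoInputEnv())
obs, _ = env.reset()
print(obs.shape)   # (8,) -- one flat vector the DQN input layer can consume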
0 votes, 0 answers
Keras-rl2 DQNAgent adds another dimension to my states for some reason and I get a ValueError
For the last day, I have been trying to deal with an error I get in the DQNAgent fit function.
I get the following error:
ValueError: Error when checking input: expected dense_input to have 2 dimensions, but got array with shape (1, 3, 4)
in dqnagent.fit…

kfir
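keras-rl prepends a window_length axis to every observation, which is where the extra dimension comes from; the usual fix is to absorb it with a Flatten layer. A sketch, using the (3, 4) state shape from the error and an assumed action count:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

WINDOW_LENGTH = 1
STATE_SHAPE = (3, 4)      # taken from the shape in the ValueError
N_ACTIONS = 2             # assumption for illustration

model = Sequential([
    # keras-rl feeds (batch, WINDOW_LENGTH, *STATE_SHAPE); Flatten absorbs it.
    Flatten(input_shape=(WINDOW_LENGTH,) + STATE_SHAPE),
    Dense(32, activation="relu"),
    Dense(N_ACTIONS, activation="linear"),
])
model.summary()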