Questions tagged [dqn]

DQN (Deep Q-Network) is a reinforcement learning method that combines Q-learning with a multi-layered neural network as the value-function approximator, adding a target network and experience replay for stable training

206 questions
1
vote
0 answers

TF-Agents _action_spec: how to define the correct shape for discrete action space?

Scenario 1 My custom environment has the following _action_spec: self._action_spec = array_spec.BoundedArraySpec( shape=(highestIndex+1,), dtype=np.int32, minimum=0, maximum=highestIndex, name='action') Therefore my actions are…
Ling
  • 449
  • 6
  • 21
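A common cause of this issue is declaring a vector spec when a single discrete action needs a scalar spec. A minimal sketch (the value of highestIndex is illustrative, not from the question):

    import numpy as np
    from tf_agents.specs import array_spec

    highestIndex = 9  # illustrative value

    # For one discrete action per step, the spec should be scalar (shape=()),
    # not a vector of length highestIndex+1; minimum/maximum bound its range.
    action_spec = array_spec.BoundedArraySpec(
        shape=(), dtype=np.int32,
        minimum=0, maximum=highestIndex, name='action')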
1
vote
1 answer

Question about reinforcement learning action/observation space size

I tried to build a custom environment for a reinforcement learning (RL) project. In examples such as Ping-Pong, Atari, and Super Mario, the action and observation spaces are really small. But in my project the action and observation spaces are really huge…
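For large spaces, a common pattern is a flat Box observation plus a Discrete action set. A sketch of such an environment in classic Gym style, with assumed (illustrative) sizes:

    import numpy as np
    import gym
    from gym import spaces

    class BigSpaceEnv(gym.Env):
        """Sketch of an env with large spaces; sizes are assumed."""
        def __init__(self):
            # 10,000-dim observation, 500 discrete actions (illustrative).
            self.observation_space = spaces.Box(
                low=-1.0, high=1.0, shape=(10000,), dtype=np.float32)
            self.action_space = spaces.Discrete(500)
        def reset(self):
            return np.zeros(self.observation_space.shape, dtype=np.float32)
        def step(self, action):
            obs = np.zeros(self.observation_space.shape, dtype=np.float32)
            return obs, 0.0, False, {}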
1
vote
0 answers

How can I change the space in CarRacing-v0 from Box to Discrete?

I want to train my agent in the CarRacing-v0 environment, but instead of Box action/observation spaces I want to use Discrete spaces so I can train it with the DQN algorithm. There is a note in OpenAI Gym that says: "Discrete control is reasonable in this…
Amaranth
  • 33
  • 1
  • 4
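One common workaround is an ActionWrapper that maps a small Discrete set onto CarRacing's continuous [steer, gas, brake] vector. A sketch with an assumed action mapping:

    import numpy as np
    import gym

    class DiscreteCarRacing(gym.ActionWrapper):
        """Map 5 discrete actions to CarRacing-v0's Box([steer, gas, brake])."""
        ACTIONS = [
            np.array([ 0.0, 0.0, 0.0], dtype=np.float32),  # no-op
            np.array([-1.0, 0.0, 0.0], dtype=np.float32),  # steer left
            np.array([ 1.0, 0.0, 0.0], dtype=np.float32),  # steer right
            np.array([ 0.0, 1.0, 0.0], dtype=np.float32),  # gas
            np.array([ 0.0, 0.0, 0.8], dtype=np.float32),  # brake
        ]
        def __init__(self, env):
            super().__init__(env)
            self.action_space = gym.spaces.Discrete(len(self.ACTIONS))
        def action(self, act):
            return self.ACTIONS[act]

    env = DiscreteCarRacing(gym.make("CarRacing-v0"))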
1
vote
1 answer

Training a DQN agent with a MultiDiscrete action space in gym

I would like to train a DQN agent with Keras-RL. My environment has both MultiDiscrete action and observation spaces. I am adapting the code from this video: https://www.youtube.com/watch?v=bD6V3rcr_54&t=5s Here is my code: class…
mercury24
  • 53
  • 1
  • 9
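Keras-RL's DQNAgent expects a single Discrete action space, so a common workaround is flattening the MultiDiscrete space into one Discrete space. A hedged sketch of such a wrapper:

    import numpy as np
    import gym

    class FlattenMultiDiscrete(gym.ActionWrapper):
        """Flatten a MultiDiscrete action space into one Discrete space."""
        def __init__(self, env):
            super().__init__(env)
            self.nvec = env.action_space.nvec
            self.action_space = gym.spaces.Discrete(int(np.prod(self.nvec)))
        def action(self, act):
            # Decode the flat index back into one sub-action per dimension.
            return np.array(np.unravel_index(act, self.nvec))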
1
vote
1 answer

DQN predicts same action value for every state (cart pole)

I'm trying to implement a DQN. As a warm-up I want to solve CartPole-v0 with an MLP consisting of two hidden layers along with input and output layers. The input is a 4-element array [cart position, cart velocity, pole angle, pole angular velocity]…
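A frequent cause of identical Q-values across states is a missing batch dimension or unscaled inputs when calling predict. A minimal sketch of the two-hidden-layer setup the question describes (layer widths are assumed):

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    # Two hidden layers for CartPole-v0: 4 state inputs -> 2 action values.
    model = keras.Sequential([
        layers.Dense(24, activation="relu", input_shape=(4,)),
        layers.Dense(24, activation="relu"),
        layers.Dense(2, activation="linear"),  # one Q-value per action
    ])
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-3),
                  loss="mse")

    state = np.array([[0.01, -0.02, 0.03, 0.04]])  # note the batch dimension
    q_values = model.predict(state)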
1
vote
0 answers

DQN is not training well

import tensorflow as tf
import keras
import numpy as np
import gym
import random
from keras.layers import *

model = keras.models.Sequential()
model.add(Dense(12, activation='tanh', input_shape=(4,)))
model.add(Dense(2, activation='linear'))…
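In minimal Keras DQN loops like the one above, training often stalls when the Bellman target is built incorrectly. A hedged sketch of one correct update step (gamma is an assumed hyperparameter):

    import numpy as np

    gamma = 0.99  # discount factor (assumed)

    def train_step(model, batch):
        """One DQN update on a batch of (s, a, r, s', done) arrays."""
        states, actions, rewards, next_states, dones = batch
        q_next = model.predict(next_states)   # Q(s', .)
        targets = model.predict(states)       # start from current Q(s, .)
        # Bellman target r + gamma * max_a' Q(s', a'); no bootstrap at terminal.
        targets[np.arange(len(actions)), actions] = (
            rewards + gamma * np.max(q_next, axis=1) * (1.0 - dones))
        model.fit(states, targets, verbose=0)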
1
vote
0 answers

About using Unity ML-Agents with the DQN algorithm

I am having difficulty training by connecting an external API to the Unity environment I have created. I was looking at DQN code for a previous ML-Agents version and wanted to use the following code. How should I use this in the current version? #…
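In current ML-Agents releases, external training typically goes through the mlagents_envs low-level API or its Gym wrapper. A minimal sketch, assuming a built environment binary (the path is hypothetical, and the wrapper's import path varies by release):

    from mlagents_envs.environment import UnityEnvironment
    from gym_unity.envs import UnityToGymWrapper  # path varies by ML-Agents release

    # file_name points at your built Unity player binary (hypothetical path).
    unity_env = UnityEnvironment(file_name="./MyBuiltEnv", no_graphics=True)
    env = UnityToGymWrapper(unity_env)  # exposes the usual gym reset()/step() API

    obs = env.reset()
    obs, reward, done, info = env.step(env.action_space.sample())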
1
vote
0 answers

PyTorch deep Q-learning: too many images in batch, empty tensors

I am trying to adapt this tutorial code: https://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html to a different environment, but I cannot train the model because it crashes in two different ways so far: This error usually…
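The tutorial's batching step assumes at least one non-final next state, so torch.cat crashes when every transition in the batch is terminal. A sketch that guards the mask construction used in that tutorial:

    import torch

    def non_final_batch(batch_next_states):
        """Build the tutorial's non-final mask, tolerating all-terminal batches."""
        mask = torch.tensor([s is not None for s in batch_next_states],
                            dtype=torch.bool)
        non_final = [s for s in batch_next_states if s is not None]
        if len(non_final) == 0:   # every transition was terminal
            return mask, None     # caller should skip the bootstrap term
        return mask, torch.cat(non_final)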
1
vote
1 answer

Expected conv2d_input to have 4 dimensions, but got array with shape (1, 1, 1, 150, 75, 3)?

Similar problems to this have been posted and answered on this forum, but for this particular case I didn't find any solution. (I'm using Keras.) I have images of shape (150, 75, 3) and I reshaped the numpy array to (1, 150, 75, 3). This is supposed to…
zed_eln
  • 13
  • 5
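Extra leading axes like (1, 1, 1, 150, 75, 3) typically come from an agent prepending batch and window dimensions on top of an already-batched array. Collapsing the extras back into a single batch axis gives the 4-D input Conv2D expects:

    import numpy as np

    x = np.zeros((1, 1, 1, 150, 75, 3))   # shape reported in the error
    x = x.reshape((-1, 150, 75, 3))       # collapse extras into the batch axis
    assert x.shape == (1, 150, 75, 3)     # 4-D input that Conv2D expects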
1
vote
1 answer

Can I pass constraints to actions in deep Q-learning in Python?

Currently I am using an RL agent (DQN) to predict actions and update the action-value function. But if I have a constraint that a particular action may only run n times, can I specify a constraint in DQN for the agent's action selection? If yes, how can I…
python_interest
  • 874
  • 1
  • 9
  • 27
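DQN has no built-in constraint mechanism; the usual approach is action masking, where invalid actions are excluded at selection time. A hedged sketch:

    import numpy as np

    def constrained_action(q_values, valid_mask):
        """Greedy action among allowed ones (valid_mask is 1/0 per action)."""
        masked = np.where(valid_mask.astype(bool), q_values, -np.inf)
        return int(np.argmax(masked))

    q = np.array([0.2, 0.9, 0.1])
    mask = np.array([1, 0, 1])          # action 1 forbidden, e.g. used its n runs
    print(constrained_action(q, mask))  # -> 0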
1
vote
0 answers

Does CarRacing-v0 have to render when training?

The game window always pops up on both macOS and Colab, and throws the error "pyglet.canvas.xlib.NoSuchDisplayException: Cannot connect to "None"". How do I close the game window during training?
steven
  • 25
  • 5
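On headless machines such as Colab, a common fix is a virtual display so pyglet renders off-screen. A sketch using pyvirtualdisplay (requires xvfb to be installed):

    # pip install pyvirtualdisplay; on Colab also: apt-get install -y xvfb
    from pyvirtualdisplay import Display

    display = Display(visible=0, size=(1400, 900))
    display.start()

    import gym
    env = gym.make("CarRacing-v0")  # pyglet now draws into the virtual display
    env.reset()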
1
vote
1 answer

Dictionary observation space with an Acme DQN agent

I'm trying to add illegal-action masking to my DQN agent using masked_epsilon_greedy. Does anyone know how I can update the policy network to use observation["your_key_for_observation"] rather than 'observation', since the observation space is a…
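I'm not certain of Acme's exact network interface here, so the following is a generic, hypothetical helper: it extracts the array under the question's observation key, computes Q-values, and applies a legal-action mask (the mask key name is assumed):

    import numpy as np

    def q_from_dict(network, observation):
        """Hypothetical helper for dict observations with action masking."""
        obs_array = observation["your_key_for_observation"]  # key from the question
        legal_mask = observation["legal_actions"]            # assumed mask key
        q_values = network(obs_array)
        return np.where(legal_mask.astype(bool), q_values, -np.inf)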
1
vote
1 answer

Using DQN to solve the shortest path problem

I'm trying to find out whether DQN can solve the shortest path problem. I have a DataFrame containing a source column (node IDs), an end column representing the destination (also node IDs), and weights representing the distance of the…
noob
  • 672
  • 10
  • 28
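A common encoding for this: state = current node, actions = outgoing edges, reward = negative edge weight, so maximizing return minimizes path length. A sketch with an assumed toy edge list shaped like the question's DataFrame:

    # Assumed toy graph: (source, end, weight) rows, destination is node 3.
    edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 5.0), (2, 3, 8.0)]

    neighbors = {}
    for src, dst, w in edges:
        neighbors.setdefault(src, []).append((dst, w))

    def step(node, action):
        """Take the action-th outgoing edge; reward is the negative distance."""
        dst, w = neighbors[node][action]
        done = (dst == 3)  # assumed destination node
        return dst, -w, done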
1
vote
2 answers

Why is DQNAgent.fit adding extra dimensions to my input data?

I'm using one of Keras-RL's deep Q-learning agents: DQNAgent. When I pass my environment into DQNAgent.fit, I receive the following error: **3 dqn.fit(env, nb_steps=50000, visualize=False,…
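Keras-RL prepends a window_length axis to each observation, so the model's first layer conventionally flattens (window_length,) + obs_shape. A sketch of that canonical pattern (layer widths are assumed):

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten

    window_length = 1  # must match the SequentialMemory window_length

    def build_model(obs_shape, n_actions):
        model = Sequential()
        # Keras-RL feeds (batch, window_length) + obs_shape, hence the Flatten.
        model.add(Flatten(input_shape=(window_length,) + obs_shape))
        model.add(Dense(24, activation="relu"))
        model.add(Dense(n_actions, activation="linear"))
        return model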
1
vote
1 answer

Deep Q-Learning - CartPole Environment

I have a question about understanding the CartPole code as an example of Deep Q-Learning. The DQL agent part of the code is as follows: class DQLAgent: def __init__(self, env): # parameter / hyperparameter self.state_size =…
jasmin
  • 79
  • 1
  • 7
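The truncated constructor above typically sets the standard DQN hyperparameters. A hedged sketch of what such a DQLAgent __init__ usually contains (values are illustrative, not the asker's actual code):

    from collections import deque

    class DQLAgent:
        def __init__(self, env):
            # Typical CartPole-style DQN hyperparameters (illustrative values).
            self.state_size = env.observation_space.shape[0]
            self.action_size = env.action_space.n
            self.gamma = 0.95            # discount factor
            self.learning_rate = 0.001
            self.epsilon = 1.0           # exploration rate, decayed per episode
            self.epsilon_decay = 0.995
            self.epsilon_min = 0.01
            self.memory = deque(maxlen=1000)  # replay buffer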