
I am using the Julia package ReinforcementLearning.jl. I would like to benefit from the fact that DQN does not require enumerating and visiting the whole state space. So my question is: how can I describe the state_space of a discrete environment without enumerating its states? In other words, assume states are represented by an array of N elements and each element can take M possible values; I would like to avoid enumerating the M^N potential states and instead provide some generative description of the space.
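
Something along these lines is what I imagine (just a rough sketch: MyDiscreteEnv, N and M are placeholders for my own environment and sizes, and I am assuming the Space wrapper that RLBase re-exports in recent versions of the package):

```julia
using ReinforcementLearning   # re-exports RLBase (AbstractEnv, Space, state_space, ...)

# Placeholder environment; the real one has more fields and the usual
# action_space / reward / is_terminated / reset! methods as well.
struct MyDiscreteEnv <: AbstractEnv
    state::Vector{Int}
end

const N = 10   # number of elements in a state (placeholder)
const M = 5    # number of values each element can take (placeholder)

# Factored description: a vector of N per-element domains. Membership is
# checked elementwise, so the M^N combinations are never written out.
RLBase.state_space(::MyDiscreteEnv) = Space(fill(Base.OneTo(M), N))

RLBase.state(env::MyDiscreteEnv) = env.state
```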

I have implemented DQN with ReinforcementLearning.jl for environments where both actions and states are discrete. To do so, I enumerated the states in the state_space definition. It works quite well, but the enumeration prevents me from getting the computational advantages of DQN.
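
For comparison, my current enumerated definition looks roughly like this (again only a sketch, reusing the placeholder names from above): each length-N state vector is flattened to a single index, so the state space is one flat range of M^N indices.

```julia
# Enumerated version (sketch): every length-N state vector is mapped to a
# single integer in 1:M^N, so the full product space appears as one range.
RLBase.state_space(::MyDiscreteEnv) = Base.OneTo(M^N)

# The state then has to be reported as that flat index as well
# (mixed-radix encoding of the vector, elements assumed to lie in 1:M).
RLBase.state(env::MyDiscreteEnv) =
    1 + sum((env.state[i] - 1) * M^(i - 1) for i in 1:N)
```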

hamidie
