
I am confused by the two terms 'observation_space' and 'state', and I do not see the purpose of even having 'observation_space' in my code in the first place. I have seen other answers, but I dug deeper into the code of RL algorithms like keras-rl's DDPGAgent and I cannot find a single use of this 'observation_space'.

The project I am working on employs a double DQN: it takes in a state and outputs the action with the highest Q-value from the model. Given this, can someone shed some light on the use of 'observation_space' in this application of a double DQN? I am trying to create a standardised environment by inheriting from gym.Env, and this 'space' is annoying me. A minimal sketch of what I mean is below.
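For context, here is a rough sketch of the kind of environment I am setting up (the class name, observation shape, and bounds are placeholders, not my actual project). As I understand it, observation_space only declares the shape and bounds of the state that reset() and step() return, which is why I do not see what it buys me:

```python
import gym
import numpy as np
from gym import spaces


class MyEnv(gym.Env):  # hypothetical environment, just for illustration
    """Minimal custom environment that declares an explicit observation_space."""

    def __init__(self):
        super().__init__()
        # The double DQN picks one of 3 discrete actions (the one with the highest Q-value).
        self.action_space = spaces.Discrete(3)
        # The 'state' the network consumes is a length-4 float vector, so the
        # observation_space just describes its shape and bounds.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32
        )
        self.state = np.zeros(4, dtype=np.float32)

    def reset(self):
        self.state = np.zeros(4, dtype=np.float32)
        return self.state

    def step(self, action):
        # ... environment dynamics would update self.state here ...
        reward = 0.0
        done = False
        return self.state, reward, done, {}
```

As far as I can tell, the only thing my agent could do with it is read observation_space.shape to size the network's input layer, and even that I currently hard-code myself.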

If there is a code source out there that actually makes use of 'observation_space', please do share it too!

  • @desertnaut May I ask whether there is a better place to post this sort of question? – Zezimabig Aug 22 '22 at 02:26
  • Please see the NOTE in https://stackoverflow.com/tags/deep-learning/info ; but before posting to any other SE site, be sure to read their respective on-topic help page. – desertnaut Aug 22 '22 at 20:19
