
Suppose I simply copy and paste the test function in core.py and rename the copy to test2, so that core.py now contains two identical functions, test and test2.
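For concreteness, the edit looks roughly like this (a sketch only; the real test() method of keras-rl's Agent class takes more parameters than shown here, and its body is elided):

# rl/core.py -- sketch of the edit, not the full keras-rl source
class Agent:
    def test(self, env, nb_episodes=1, visualize=False, **kwargs):
        # ... original keras-rl implementation ...
        pass

    def test2(self, env, nb_episodes=1, visualize=False, **kwargs):
        # identical copy of test(); only the name differs
        pass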

Then, in one of the DQN examples, say dqn_cartpole.py, I call:

dqn.test2(env, nb_episodes=5, visualize=True)

instead of

dqn.test(env, nb_episodes=5, visualize=True)

in the last line of the following code.

Here is dqn_cartpole.py for reference:

import numpy as np
import gym

from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.optimizers import Adam

from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory


ENV_NAME = 'CartPole-v0'


# Get the environment and extract the number of actions.
env = gym.make(ENV_NAME)
np.random.seed(123)
env.seed(123)
nb_actions = env.action_space.n

# Next, we build a very simple model.
model = Sequential()
model.add(Flatten(input_shape=(1,) + env.observation_space.shape))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(16))
model.add(Activation('relu'))
model.add(Dense(nb_actions))
model.add(Activation('linear'))
print(model.summary())

# Finally, we configure and compile our agent. You can use every built-in Keras optimizer and
# even the metrics!
memory = SequentialMemory(limit=50000, window_length=1)
policy = BoltzmannQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10,
               target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=1e-3), metrics=['mae'])

# Okay, now it's time to learn something! We visualize the training here for show, but this
# slows down training quite a lot. You can always safely abort the training prematurely using
# Ctrl + C.
dqn.fit(env, nb_steps=50000, visualize=True, verbose=2)

# After training is done, we save the final weights.
dqn.save_weights('dqn_{}_weights.h5f'.format(ENV_NAME), overwrite=True)

# Finally, evaluate our algorithm for 5 episodes.
dqn.test(env, nb_episodes=5, visualize=True)

Why am I getting the following error?

AttributeError: 'DQNAgent' object has no attribute 'test2'

Soheil
  • Are you importing keras-rl from the source code in your project folder or from a pip installation? If you pip-installed it and the `rl` folder is not in your project's root directory, then it doesn't matter that you changed it, because Python won't pick up those changes; you would need to reinstall the repo with the changes using pip. – Matthew Barlowe Jun 13 '20 at 04:29
  • Awesome. I imported it from a pip installation. Thank you very much. – Soheil Jun 15 '20 at 13:37
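In other words, Python was importing the unmodified pip-installed copy of rl rather than the locally edited core.py. A quick way to confirm which copy is being imported (a diagnostic sketch; the paths shown are examples, not from the original question):

import rl
print(rl.__file__)
# If this prints something like .../site-packages/rl/__init__.py, the
# pip-installed copy is in use, and local edits to core.py are never seen.
# Either edit that installed copy, or reinstall the modified source in
# editable mode so changes take effect on the next import:
#     pip install -e /path/to/keras-rl   # path is a placeholder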
