
I recently tried using Keras-RL to train an agent on a TicTacToe game I made, as practice for building custom environments ahead of my third-year final project, which involves doing the same thing on a much larger, proper game.

At the following step an error gets thrown at me. I've tried googling it, but all the answers I found were specific to other situations (or maybe I'm just bad at googling):

dqn = build_agent(model, actions)
dqn.compile(Adam(lr=1e-3), metrics=["mae"])
dqn.fit(env, nb_steps=50000, visualize=False, verbose=1)

I'm using the following to build the model and the agent:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy
from rl.memory import SequentialMemory

env = TTTEnv()
states = env.observation_space.shape
actions = env.action_space.n

def build_model(states, actions):
    model = Sequential()
    model.add(Dense(24, activation="relu", input_shape=states))
    model.add(Dense(24, activation="relu"))
    model.add(Dense(actions, activation="linear"))
    return model

def build_agent(model, actions):
    policy = BoltzmannQPolicy()
    memory = SequentialMemory(limit=50000, window_length=1)
    dqn = DQNAgent(model=model, memory=memory, policy=policy,
                   nb_actions=actions, nb_steps_warmup=10,
                   target_model_update=1e-2)
    return dqn
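
For completeness, the model passed to build_agent comes straight out of build_model; I've just been sanity-checking what input shape it ends up expecting (I believe states should be (9,), since observation_space is a length-9 array):

model = build_model(states, actions)
model.summary()
# The first Dense layer is built with input_shape=states, which should be (9,),
# so I'd expect it to want a flat 9-element observation per sample.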

Here's my custom environment, which wraps the TicTacToe game I wrote myself:

from gym import Env
from gym.spaces import Discrete
import numpy as np
# Game is my own TicTacToe class, imported from my game module

class TTTEnv(Env):
    def __init__(self):
        self.action_space = Discrete(9)
        # The 3x3 version caused problems with keras-rl, so I resorted to flattening it:
        # self.observation_space = np.array([[Discrete(3)]*3, [Discrete(3)]*3, [Discrete(3)]*3])
        self.observation_space = np.array([Discrete(3)]*9)
        self.game = Game()
        self.state = self.game.gameArray.flatten()

    def step(self, action):
        reward = 0
        done = False
        self.game.printGame()
        position = self.game.inputs[action]
        if self.game.gameArray[position[0], position[1]] != 0:
            # Illegal move: the chosen square is already occupied.
            reward -= 20
            done = True
        else:
            self.game.gameArray[position[0], position[1]] = 1
            gameOver, winner = self.game.checkWinGYM()
            if winner == "win":
                reward += 50
                done = gameOver
            elif winner == "draw":
                reward += 10
            elif winner == "ingame":
                # Game still running: let the scripted opponent move, then re-check.
                self.game.handleBotTurn()
                gameOver, winner = self.game.checkWinGYM()
                if winner == "loss":
                    done = gameOver
                    reward -= 50
                elif winner == "draw":
                    done = gameOver
                    reward += 10
        info = {}
        return self.game.gameArray.flatten(), reward, done, info

    def render(self):
        pass

    def reset(self):
        self.state = np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0]])
        self.game.resetGameArray()
        return self.state
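
For what it's worth, stepping the environment by hand (outside keras-rl) seems to behave as I'd expect. The only oddity I can see is that reset() hands back the raw 3x3 board while step() returns the flattened version, but I'm not sure whether that's what causes the error. Roughly (4 is just an arbitrary board position):

env = TTTEnv()
obs = env.reset()
print(obs.shape)                 # (3, 3) - reset() returns the unflattened board
obs, reward, done, info = env.step(4)
print(obs.shape, reward, done)   # (9,)   - step() returns the flattened board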

I understand my code is not the cleanest, so forgive me. I'm just trying to throw something together quickly so I can move on to my real target: my final project. If you'd like any more code, please let me know and I'll add it.

Thank you!

Edit: Added error:

"ValueError: Error when checking input: expected dense_9_input to have 2 dimensions, but got array with shape (1, 1, 3, 3)"
