I'm trying to experiment with tf_agents' PPOAgent in the CartPole-v1 environment, but I am receiving the following error upon declaring the agent itself:

ValueError: actor_network output spec does not match action spec:
TensorSpec(shape=(2,), dtype=tf.float32, name=None)
vs.
BoundedTensorSpec(shape=(), dtype=tf.int64, name='action', minimum=array(0, dtype=int64), maximum=array(1, dtype=int64))

I believe the issue is that the output of my network is tf.float32 rather than tf.int64, but I could be wrong. I don't know how to make the network output an integer, though, and as I understand it that's not really possible or desirable anyway.

If I run a continuous environment like MountainCarContinuous-v0 I get a different error:

ValueError: Unexpected output from `actor_network`.  Expected `Distribution` objects, but saw output spec: TensorSpec(shape=(1,), dtype=tf.float32, name=None)
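
For reference, here is a quick way to print the action spec each environment expects (a standalone check, separate from my training code):

from tf_agents.environments import suite_gym

# Print the action spec the agent's actor network will be asked to match.
for name in ['CartPole-v1', 'MountainCarContinuous-v0']:
    env = suite_gym.load(name)
    print(name, env.action_spec())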

Here's the relevant code (mostly taken from the DQN tutorial):

import tensorflow as tf
import tf_agents.agents
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import sequential
from tf_agents.specs import tensor_spec

# env_name = 'MountainCarContinuous-v0'
env_name = 'CartPole-v1'
train_py_env = suite_gym.load(env_name)
eval_py_env = suite_gym.load(env_name)

train_env = tf_py_environment.TFPyEnvironment(train_py_env)
eval_env = tf_py_environment.TFPyEnvironment(eval_py_env)

train_env.reset()
eval_env.reset()

actor_layer_params = (100, 50)
critic_layer_params = (100, 50)
action_tensor_spec = tensor_spec.from_spec(train_env.action_spec())
num_actions = action_tensor_spec.maximum - action_tensor_spec.minimum + 1

# Define a helper function to create Dense layers configured with the right
# activation and kernel initializer.
def dense_layer(num_units):
  return tf.keras.layers.Dense(
      num_units,
      activation=tf.keras.activations.relu,
      kernel_initializer=tf.keras.initializers.VarianceScaling(
          scale=2.0, mode='fan_in', distribution='truncated_normal'))

#Actor network
dense_layers = [dense_layer(num_units) for num_units in actor_layer_params]
actions_layer = tf.keras.layers.Dense(
    num_actions,
    name='actions',
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(
        minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))

ActorNet = sequential.Sequential(dense_layers + [actions_layer])

#Critic/value network
dense_layers = [dense_layer(num_units) for num_units in critic_layer_params]
criticism_layer = tf.keras.layers.Dense(
    1,
    activation=None,
    kernel_initializer=tf.keras.initializers.RandomUniform(
        minval=-0.03, maxval=0.03),
    bias_initializer=tf.keras.initializers.Constant(-0.2))
CriticNet = sequential.Sequential(dense_layers + [criticism_layer])

learning_rate = 1e-3  # as in the DQN tutorial
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)

train_step_counter = tf.Variable(0)


#Error occurs here
agent = tf_agents.agents.PPOAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    optimizer=optimizer,
    actor_net=ActorNet,
    value_net=CriticNet,
    train_step_counter=train_step_counter)

I feel like I must be missing something obvious or have a fundamental misunderstanding; any and all help would be appreciated. I couldn't find an example of a PPOAgent in use.

Old_Frog

1 Answer

Figured it out: I needed to use a network that returns a distribution, such as an ActorDistributionNetwork.

Details here: https://www.tensorflow.org/agents/api_docs/python/tf_agents/networks/actor_distribution_network/ActorDistributionNetwork
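
For anyone hitting the same errors, here is roughly what the working setup looks like for CartPole-v1 (a sketch only; the layer sizes mirror the ones in my question and the learning rate is just a placeholder):

import tensorflow as tf
from tf_agents.agents import PPOAgent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import actor_distribution_network, value_network

train_env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v1'))

# Actor network that emits a distribution over actions rather than raw floats,
# which is what PPOAgent expects.
actor_net = actor_distribution_network.ActorDistributionNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=(100, 50))

# Stock value (critic) network from tf_agents.
value_net = value_network.ValueNetwork(
    train_env.observation_spec(),
    fc_layer_params=(100, 50))

agent = PPOAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # placeholder rate
    actor_net=actor_net,
    value_net=value_net)
agent.initialize()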

Old_Frog