I'm trying to implement DDPG in TensorFlow. The action space is continuous, with upper bound P_max and lower bound P_min. Based on this paper, inverting gradients is a good approach for bounded continuous action spaces. However, I get stuck when updating the actor network. I'll go through my code below.
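For reference, my understanding of the inverting-gradients rule from the paper is the following (a minimal sketch in plain Python for a single scalar action p; the names dq_dp, p_min, and p_max are my own):

def invert_gradient(dq_dp, p, p_min, p_max):
    # Scale the gradient by the remaining distance to the bound the
    # gradient pushes the action towards, so it shrinks to zero at the
    # boundary (and inverts once the bound is exceeded).
    if dq_dp > 0:  # gradient suggests increasing p
        return dq_dp * (p_max - p) / (p_max - p_min)
    return dq_dp * (p - p_min) / (p_max - p_min)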
First, I build placeholders for the state, next state, and reward, where S_DIM is the state dimension:
self.S = tf.placeholder(tf.float32, [None, S_DIM], name='state')
self.S_ = tf.placeholder(tf.float32, [None, S_DIM], name='next_state')
self.R = tf.placeholder(tf.float32, [None, 1], name='reward')
Then I build the actor and critic networks, where A_DIM is the action dimension:
def build_a(self, s, scope, trainable):
    with tf.variable_scope('actor'):
        with tf.variable_scope(scope):
            l1 = tf.layers.dense(s, 100, tf.nn.relu, trainable=trainable)
            a = tf.layers.dense(l1, A_DIM, trainable=trainable)
            return a
def build_c(self, s, a, scope, trainable):
    with tf.variable_scope('critic'):
        with tf.variable_scope(scope):
            concat_layer = tf.concat([s, a], axis=1)
            l1 = tf.layers.dense(concat_layer, 100, tf.nn.relu, trainable=trainable)
            q = tf.layers.dense(l1, 1, trainable=trainable)
            return q
self.a = self.build_a(self.S, scope='evaluation', trainable=True)
self.a_ = self.build_a(self.S_, scope='target', trainable=False)
self.q = self.build_c(self.S, self.a, scope='evaluation', trainable=True)
self.q_ = self.build_c(self.S_, self.a_, scope='target', trainable=False)
Access the network parameters for later use:
self.ae_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='actor/evaluation')
self.at_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='actor/target')
self.ce_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='critic/evaluation')
self.ct_params = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope='critic/target')
Then, update the critic by minimizing the difference between the temporal-difference target q_target and q, where GAMMA is the discount factor (for example 0.99):
q_target = self.R + GAMMA * self.q_
self.c_loss = tf.losses.mean_squared_error(q_target, self.q)
self.ctrain = tf.train.AdamOptimizer(0.001).minimize(self.c_loss, var_list=self.ce_params)
Finally, update the actor (this is where I get stuck). Here upper and lower are the action bounds P_max and P_min:
dq_da = tf.gradients(self.q, self.a)[0]  # partial Q, partial a
upper_method = lambda: dq_da * (upper - self.a) / (upper - lower)
lower_method = lambda: dq_da * (self.a - lower) / (upper - lower)
# if gradient suggests increasing action, apply upper method
# else, lower method
adjust_dq_da = tf.cond(tf.greater(dq_da, 0), upper_method, lower_method)
# chain rule: dQ/dtheta = dQ/da * da/dtheta, via grad_ys
grad = tf.gradients(self.a, self.ae_params, grad_ys=adjust_dq_da)
# apply the gradient to the actor parameters; the negative learning
# rate turns minimization into gradient ascent on Q
self.atrain = tf.train.AdamOptimizer(-0.0001).apply_gradients(zip(grad, self.ae_params))
And I get this error:
ValueError: Shape must be rank 0 but is rank 2 for 'actor_gradient/cond/Switch' (op: 'Switch') with input shapes: [?,1], [?,1].
Is there any way to improve this?
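From the error message, I suspect tf.cond expects a scalar boolean predicate, while tf.greater(dq_da, 0) has shape [?, 1]. Would an element-wise select such as tf.where be the right replacement? A minimal sketch of what I have in mind (untested):

# tf.where chooses between the two branches element-wise, so each
# action in the batch uses its own side of the inverting rule
adjust_dq_da = tf.where(tf.greater(dq_da, 0),
                        dq_da * (upper - self.a) / (upper - lower),
                        dq_da * (self.a - lower) / (upper - lower))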