
Given a TensorFlow tf.while_loop, how can I calculate the gradient of x_out with respect to all weights of the network for each time step?

import tensorflow as tf

network_input = tf.placeholder(tf.float32, [None])
steps = tf.constant(0.0)

weight_0 = tf.Variable(1.0)
layer_1 = network_input * weight_0

def condition(steps, x):
    return steps <= 5

def loop(steps, x_in):
    # tf.while_loop traces this body only once at graph-construction time
    weight_1 = tf.Variable(1.0)
    x_out = x_in * weight_1
    steps += 1
    return [steps, x_out]

_, x_final = tf.while_loop(
    condition,
    loop,
    [steps, layer_1]
)

Some notes:

  1. In my network the condition is dynamic: different runs will execute the while loop a different number of times.
  2. Calling tf.gradients(x, tf.trainable_variables()) crashes with AttributeError: 'WhileContext' object has no attribute 'pred' (a minimal reproduction follows these notes). It seems the only way to use tf.gradients within the loop is to compute the gradient with respect to weight_1 and the current value of x_in at the current time step only, without backpropagating through time.
  3. At each time step, the network outputs a probability distribution over actions. The gradients are then needed for a policy gradient implementation.
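
For concreteness, the failing call from note 2 looks like this (a reproduction sketch; I am assuming the x in the note refers to x_final from the snippet above):

# Reproduction sketch: with weight_1 created inside the loop body above,
# this call crashes as described in note 2.
grads = tf.gradients(x_final, tf.trainable_variables())
# AttributeError: 'WhileContext' object has no attribute 'pred'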
Genius
  • Are you sure you are interested in `x_out` and not `x_final`? – ben Apr 01 '18 at 09:32
  • Yes, the network is a self-unrolling model like [image captioning](https://cs.stanford.edu/people/karpathy/cvpr2015.pdf). The network outputs a probability distribution over actions at each time step, until it decides to be "done". I need the gradient of each of the outputs (actions) and not only the last one. – Genius Apr 01 '18 at 15:10
  • Are you trying to create a new variable on each `tf.while_loop` iteration? That cannot be done with TensorFlow. With your current code you are creating only two variables, one used for `layer_1` and another one used on every loop iteration. – jdehesa Apr 05 '18 at 13:30
  • No, I don't want to create new variables in every iteration. I simply want to backpropagate through time: Compute the gradient of `x_out` with respect to `weight_0` and `weight_1` for every time step. – Genius Apr 06 '18 at 09:38
  • So why are you declaring `weight_1 = tf.Variable(1.0)` inside the loop? Was your intention to actually use `tf.get_variable`? – ldavid Apr 06 '18 at 15:28
  • I think it shouldn't make a difference, since "while_loop calls `cond` and `body` exactly once" ([copied from the TensorFlow API](https://www.tensorflow.org/api_docs/python/tf/while_loop)) – Genius Apr 06 '18 at 15:42

1 Answer


You can't ever call tf.gradients inside tf.while_loop in TensorFlow, based on this and this. I found this out the hard way when I was trying to implement conjugate gradient descent entirely inside the TensorFlow graph.

But if I understand your model correctly, you could write your own version of an RNNCell and wrap it in tf.nn.dynamic_rnn. The actual cell implementation will be a little complex, since you need to evaluate a condition dynamically at runtime; a rough sketch follows.
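
A minimal sketch of that idea (TF 1.x API; the cell below is illustrative, and for brevity it uses a fixed number of steps rather than your dynamic condition — tf.nn.dynamic_rnn's sequence_length argument is one way to handle per-example lengths):

import tensorflow as tf

class MultiplyCell(tf.nn.rnn_cell.RNNCell):
    """Illustrative cell: each step multiplies the carried state by a shared weight."""

    @property
    def state_size(self):
        return 1

    @property
    def output_size(self):
        return 1

    def __call__(self, inputs, state, scope=None):
        # tf.get_variable with AUTO_REUSE shares one weight across all time steps
        with tf.variable_scope(scope or "multiply_cell", reuse=tf.AUTO_REUSE):
            weight_1 = tf.get_variable("weight_1", initializer=1.0)
        x_out = state * weight_1
        return x_out, x_out  # emit x_out as both the step output and the next state

network_input = tf.placeholder(tf.float32, [None])
weight_0 = tf.get_variable("weight_0", initializer=1.0)
layer_1 = tf.expand_dims(network_input * weight_0, 1)  # state shape: [batch, 1]

max_steps = 6  # stand-in for the dynamic stopping condition
dummy_inputs = tf.zeros([tf.shape(network_input)[0], max_steps, 1])
outputs, _ = tf.nn.dynamic_rnn(MultiplyCell(), dummy_inputs, initial_state=layer_1)

# outputs holds x_out for every time step, so this single call
# backpropagates through time to all weights.
grads = tf.gradients(outputs, tf.trainable_variables())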

For starters, you can take a look at TensorFlow's dynamic_rnn code here.

Alternatively, dynamic graphs have never been TensorFlow's strong suit, so consider using another framework like PyTorch, or try out eager execution (sketched below) and see if that helps.
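
If you go the eager route, here is a rough sketch (TF 1.x with eager execution enabled; the input values are illustrative): the loop becomes ordinary Python, so the condition can be fully dynamic, and a persistent tf.GradientTape gives you one gradient per time step.

import tensorflow as tf
tf.enable_eager_execution()

weight_0 = tf.Variable(1.0)
weight_1 = tf.Variable(1.0)
network_input = tf.constant([2.0, 3.0])

with tf.GradientTape(persistent=True) as tape:
    x = network_input * weight_0
    step_outputs = []
    steps = 0
    while steps <= 5:  # plain Python control flow; the condition can be dynamic
        x = x * weight_1
        step_outputs.append(x)
        steps += 1

# One gradient per time step, each backpropagated through all earlier steps.
per_step_grads = [tape.gradient(x_out, [weight_0, weight_1])
                  for x_out in step_outputs]
del tape  # release the persistent tape's resources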

squadrick
  • Have you found a workaround for Conjugate Gradient entirely in TF (and avoiding eager execution)? – niko Jan 02 '19 at 00:11
  • I have, yes. It's non-trivial. Check out [this repository](https://github.com/tensorforce/tensorforce). They have a pretty robust implementation. – squadrick Jan 03 '19 at 12:06