
In all the tutorials (including the official TF docs) that I see about TFE, the examples use the gradient tape and manually accumulate all the variables into a list that is passed to tape.gradient, e.g.:

variables = [w1, b1, w2, b2]  # <-- manually store all the variables
optimizer = tf.train.AdamOptimizer()
with tf.GradientTape() as tape:
    y_pred = model.predict(x, variables)
    loss = model.compute_loss(y_pred, y)
grads = tape.gradient(loss, variables)  # <-- pass them to tape.gradient
optimizer.apply_gradients(zip(grads, variables))

But is this the only way? Even for huge models, do we need to accumulate all the parameters ourselves, or can we somehow access the default graph's variable list?
Trying to access tf.get_default_graph().get_collection(tf.GraphKeys.GLOBAL_VARIABLES) or trainable_variables inside a TFE session returned an empty list.

DsCpp

1 Answer


To the best of my understanding, eager mode in TensorFlow stores information about the model in objects, for example in tf.keras.Model or tf.estimator.Estimator. In the absence of a graph, you can get the list of variables only there, using tf.keras.Model.trainable_variables for example. Eager mode can, however, work with a graph object created explicitly; in that case, I think the graph will store the list of variables. Without it, the Keras model object is the only explicit storage for the variables.
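
For illustration, a minimal sketch of the pattern described above, using the TF 1.x eager API this thread is about (the toy model, layer sizes, and random data are made up for the example):

import tensorflow as tf

tf.enable_eager_execution()  # TF 1.x eager mode

# A hypothetical toy model; any tf.keras.Model tracks its variables the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1),
])
optimizer = tf.train.AdamOptimizer()

x = tf.random_normal((8, 4))
y = tf.random_normal((8, 1))

with tf.GradientTape() as tape:
    y_pred = model(x)
    loss = tf.reduce_mean(tf.square(y_pred - y))

# No manual bookkeeping of w1, b1, ...: the model object exposes its variables.
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))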

Sharky
  • So just to make sure: if I use multiple `keras models` within my model and I want to calculate the gradients, my only way is to store the variables myself? That sounds like a non-high-level-API way to go (so I assume there is another way) – DsCpp Mar 21 '19 at 11:55
  • You can get them from keras.model. It's only if you don't use Keras that you'll have to store them yourself. I think this is in active development and will be available in TF 2.0 – Sharky Mar 21 '19 at 12:08
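
A minimal sketch of what the comment exchange suggests for the multi-model case (model_a, model_b, x, y, and optimizer are hypothetical stand-ins, reusing the eager setup above): each Keras model still tracks its own variables, and the per-model lists can simply be concatenated before being passed to the tape.

# Hypothetical: two Keras sub-models composed in one training step.
all_variables = model_a.trainable_variables + model_b.trainable_variables

with tf.GradientTape() as tape:
    y_pred = model_b(model_a(x))
    loss = tf.reduce_mean(tf.square(y_pred - y))

grads = tape.gradient(loss, all_variables)
optimizer.apply_gradients(zip(grads, all_variables))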