In all the tutorials I have seen about TensorFlow eager execution (tfe), including the official TF docs, the example uses a gradient tape and manually collects all the variables into a list that is then passed to tape.gradient, e.g.:
variables = [w1, b1, w2, b2]  # manually store all the variables
optimizer = tf.train.AdamOptimizer()

with tf.GradientTape() as tape:
    y_pred = model.predict(x, variables)
    loss = model.compute_loss(y_pred, y)

grads = tape.gradient(loss, variables)  # pass the same list to tape.gradient
optimizer.apply_gradients(zip(grads, variables))
But is this the only way? Even for huge models, do we have to accumulate all the parameters by hand, or can we somehow access the default graph's list of variables?
Trying to access tf.get_default_graph().get_collection(tf.GraphKeys.GLOBAL_VARIABLES) or tf.trainable_variables() with eager execution enabled gave an empty list.
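For reference, this is roughly what I tried (a minimal sketch, assuming TF 1.x with tf.enable_eager_execution() available, i.e. TF >= 1.7; the variable is just a placeholder):

import tensorflow as tf

tf.enable_eager_execution()

w1 = tf.Variable(tf.random_normal([3, 3]))  # a variable created under eager execution

# Both collection lookups come back empty on my setup, since eager variables
# are not added to the default graph's collections:
print(tf.get_default_graph().get_collection(tf.GraphKeys.GLOBAL_VARIABLES))  # []
print(tf.trainable_variables())  # []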