
I am trying to optimize a loss function (an evidence lower bound, ELBO) with tf.train.AdamOptimizer.minimize() on TensorFlow 1.15.2 with eager execution enabled. I tried the following:

learning_rate = 0.01
optim = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optim.minimize(loss)

and got the following error: RuntimeError: "loss" passed to Optimizer.compute_gradients should be a function when eager execution is enabled.

This works fine if I disable eager execution, but I need eager execution enabled because I have to save a TensorFlow variable as a NumPy array. The documentation mentions that when eager execution is enabled, the loss must be a callable, i.e. the loss function should be defined so that it takes no inputs and returns the loss. I am not exactly sure how to achieve this; my rough understanding is sketched below.
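If I understand the documentation correctly, the callable form would look roughly like this toy sketch, where w and the quadratic loss are just stand-ins for my actual variables and ELBO computation (so this is only my guess at the intended pattern, not my real code):

import tensorflow as tf

tf.enable_eager_execution()

# stand-in for my actual ELBO parameters
w = tf.Variable(tf.random.normal([3]), name="w")

def loss_fn():
    # recompute the loss from the current variable values on every call;
    # a dummy quadratic stands in for the real ELBO here
    return tf.reduce_sum(tf.square(w - 1.0))

optim = tf.train.AdamOptimizer(learning_rate=0.01)

for step in range(100):
    # with a callable loss, gradients are taken against the variables
    # the callable touches (or against var_list if it is given)
    optim.minimize(loss_fn, var_list=[w])

print(w.numpy())  # the eager value I ultimately need as a NumPy array

What I do not see is how to wrap my existing loss tensor in such a function.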

I also tried train_op = optim.minimize(lambda: loss), but got: ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [] and loss <function <lambda> at 0x7f3c67a93b00>

  • Does this answer your question? [`loss` passed to Optimizer.compute_gradients should be a function when eager execution is enabled](https://stackoverflow.com/questions/57858219/loss-passed-to-optimizer-compute-gradients-should-be-a-function-when-eager-exe) – Xiddoc Jun 12 '21 at 09:56
  • @Xiddoc Sadly, no! As I mentioned I am using Tensorflow 1.15.2 so I do not need to disable TF v2 behaviour – The Doctor Jun 12 '21 at 10:09
  • were you able to solve the issue? – Dirk V Jan 03 '23 at 02:36

0 Answers