I am trying to optimize a loss function (defined using the evidence lower bound) with tf.train.AdamOptimizer.minimize() on TensorFlow 1.15.2 with eager execution enabled. I tried the following:
import tensorflow as tf
tf.enable_eager_execution()  # eager mode on TF 1.15

learning_rate = 0.01
optim = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optim.minimize(loss)  # loss is the ELBO loss tensor computed earlier
and got the following error:
RuntimeError: "loss" passed to Optimizer.compute_gradients should be a function when eager execution is enabled.
This works fine if I disable eager execution, but I need to save a TensorFlow variable as a NumPy array, so I need eager execution enabled. The documentation mentions that when eager execution is enabled, the loss must be a callable, i.e. the loss function should take no inputs and return the loss. I am not exactly sure how to write my loss that way.
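As far as I understand, the pattern the documentation describes looks roughly like the toy sketch below (a quadratic loss on a single variable w, just for illustration, not my actual ELBO):

import tensorflow as tf
tf.enable_eager_execution()

w = tf.Variable(5.0)

def loss_fn():
    # the loss is recomputed from the current value of w on every call
    return tf.square(w - 2.0)

optim = tf.train.AdamOptimizer(learning_rate=0.01)
for _ in range(100):
    optim.minimize(loss_fn)   # the optimizer calls loss_fn itself
print(w.numpy())              # w moves toward 2.0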
For my actual loss tensor I tried
train_op = optim.minimize(lambda: loss)
but got:
ValueError: No gradients provided for any variable, check your graph for ops that do not support gradients, between variables [] and loss <function <lambda> at 0x7f3c67a93b00>
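I think the toy version below reproduces the same error, which makes me suspect that wrapping an already-computed tensor in a lambda is not enough, because no variables are read when the optimizer calls the lambda (again, w and the quadratic loss are just stand-ins for my model):

import tensorflow as tf
tf.enable_eager_execution()

w = tf.Variable(5.0)
loss = tf.square(w - 2.0)      # computed once, up front, as a concrete eager tensor

optim = tf.train.AdamOptimizer(learning_rate=0.01)
optim.minimize(lambda: loss)   # raises the same "No gradients provided" ValueError

What is the correct way to define my loss as a callable here, so that minimize() works with eager execution enabled?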