
I would like to have a few loss functions, e.g.:

    def loss_equation(x, a, b):
        """L2 loss for the matrix equation `a = xb`."""
        a_test = tf.matmul(x, b)
        return tf.reduce_sum(tf.square(a - a_test))

    def loss_regular(x):
        """L2 loss regularizing entries of `x`."""
        return tf.reduce_sum(tf.square(x))

And I would like to find the optimal `x` by feeding the losses to a custom optimizing function, e.g.:

    x_optimal = some_optimizer(
        { "loss": loss_equation,
          "args": [param_a, param_b]
        },
        { "loss": loss_regular,
          "args": []
        })

The optimizer should find the best `x` minimizing the sum of the specified losses (e.g. in one experiment I have two losses, each with its own parameters; in another I have five). How do I program this modular behaviour in TensorFlow?

benjaminplanche
Pawel

1 Answer

    x = ...

    def loss_overall(x):
        return loss_equation(x, param_a, param_b) + loss_regular(x)

    loss = loss_overall(x)
    opt = tf.train.AdamOptimizer(1e-3)
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        train_op = opt.minimize(loss)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(1000):  # run as many training steps as needed
            sess.run(train_op)

Firstly, create the loss tensor.
Secondly, define the optimizer.
Thirdly, invoke the `minimize` method to get the train operation.
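To get the modular behaviour asked about (an arbitrary list of losses, each with its own parameters), you can sum the losses with a small helper before building the optimizer. A minimal sketch, assuming the `{"loss": ..., "args": [...]}` dict format from the question (the helper name `combine_losses` is my own, not a TensorFlow API; the scalar losses below just stand in for the tensor versions):

```python
def combine_losses(specs):
    """Given a list of {"loss": fn, "args": [...]} dicts, return one
    callable that sums all losses evaluated at x. It only uses `+`,
    so it works with plain numbers and TensorFlow tensors alike."""
    def total_loss(x):
        total = specs[0]["loss"](x, *specs[0]["args"])
        for spec in specs[1:]:
            total = total + spec["loss"](x, *spec["args"])
        return total
    return total_loss

# Scalar stand-ins for the tensor losses above, just to illustrate.
def loss_equation(x, a, b):
    return (a - b * x) ** 2

def loss_regular(x):
    return x ** 2

loss = combine_losses([
    {"loss": loss_equation, "args": [6.0, 2.0]},
    {"loss": loss_regular, "args": []},
])
print(loss(1.0))  # (6 - 2*1)^2 + 1^2 = 17.0
```

In graph-mode TensorFlow, calling the returned `total_loss(x)` on a tensor `x` builds a single loss tensor, which you can then pass to `opt.minimize` exactly as in the answer above, regardless of whether the experiment uses two losses or five.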