I would like to have a few loss functions, e.g.:
    import tensorflow as tf

    def loss_equation(x, a, b):
        """L2 loss for the matrix equation `a = bx`."""
        a_test = tf.matmul(b, x)
        return tf.reduce_sum(tf.square(a - a_test))

    def loss_regular(x):
        """L2 loss regularizing the entries of `x`."""
        return tf.reduce_sum(tf.square(x))
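For example (assuming TF 2.x eager execution; the concrete shapes and values below are just made up for illustration), the losses would be evaluated like this:

    a = tf.constant([[1.0], [2.0]])
    b = tf.constant([[1.0, 0.0], [0.0, 2.0]])
    x = tf.Variable(tf.zeros((2, 1)))
    print(loss_equation(x, a, b).numpy())  # 5.0 for this data, since b @ x is all zeros
    print(loss_regular(x).numpy())         # 0.0, since x is all zeros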
I would then like to find the optimal `x` by feeding the losses to a custom optimizing function, like this:
    x_optimal = some_optimizer(
        {"loss": loss_equation, "args": [param_a, param_b]},
        {"loss": loss_regular, "args": []})
The optimizer should find the best `x` that minimizes the sum of the specified losses (e.g. in one experiment I have two losses, each with its own parameters, in another I have five).
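To make the intended behaviour concrete, here is a rough sketch of what I imagine `some_optimizer` doing (assuming TF 2.x eager execution; `x_shape`, the step count, and the choice of `tf.keras.optimizers.SGD` are just placeholders I made up, not requirements):

    import tensorflow as tf

    def some_optimizer(*loss_specs, x_shape, steps=1000, learning_rate=0.1):
        """Minimize the sum of the given losses with respect to a single variable `x`."""
        # Each spec is a dict: {"loss": callable taking (x, *args), "args": list of extra args}.
        x = tf.Variable(tf.zeros(x_shape, dtype=tf.float32))
        opt = tf.keras.optimizers.SGD(learning_rate=learning_rate)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                # Total objective: the sum of every configured loss evaluated at the current x.
                total = tf.add_n([spec["loss"](x, *spec["args"]) for spec in loss_specs])
            grads = tape.gradient(total, [x])
            opt.apply_gradients(zip(grads, [x]))
        return x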
How do I program this modular behaviour in TensorFlow?