
Is there any way in Keras to specify a loss function which does not need to be passed target data?

I attempted to specify a loss function which omitted the y_true parameter like so:

def custom_loss(y_pred):

But I got the following error:

Traceback (most recent call last):
  File "siamese.py", line 234, in <module>
    model.compile(loss=custom_loss,optimizer=Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 911, in compile
    sample_weight, mask)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 436, in weighted
    score_array = fn(y_true, y_pred)
TypeError: custom_loss() takes exactly 1 argument (2 given)

I then tried to call fit() without specifying any target data:

 model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)

But it looks like not passing any target data causes an error:

Traceback (most recent call last):
  File "siamese.py", line 264, in <module>
    model.fit(x=[x_train,x_train_warped, affines], batch_size = bs, epochs=1)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1435, in fit
    batch_size=batch_size)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1322, in _standardize_user_data
    in zip(y, sample_weights, class_weights, self._feed_sample_weight_modes)]
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 577, in _standardize_weights
    return np.ones((y.shape[0],), dtype=K.floatx())
AttributeError: 'NoneType' object has no attribute 'shape'

I could manually create dummy data in the same shape as my neural net's output but this seems extremely messy. Is there a simple way to specify an unsupervised loss function in Keras that I am missing?

Nick Bishop
  • I think you are missing the point, what would your unsupervised loss do exactly? What exact computation? – Dr. Snoopy Jun 26 '17 at 13:47
  • I am trying to compare the similarity of two different outputs from the neural net. The more similar they are the lower the loss should be. To be more specific, I am attempting to re-implement the neural network described in this [paper](https://arxiv.org/abs/1705.02193) – Nick Bishop Jun 26 '17 at 13:53
  • I think you should use the dummy data.... yes...it's ugly and I don't like it either... but I can't see a solution. – Daniel Möller Jun 26 '17 at 16:03
  • the second error related to your input/output data, you need to use `numpy.array`. You can use `x_train` as a target. – Mirodil Jan 19 '18 at 13:58

2 Answers


Write your loss function as if it had two arguments:

  1. y_true
  2. y_pred

If you don't have y_true, that's fine: you don't need to use it when computing the loss, but keep the placeholder in your function signature so that Keras doesn't complain.

def custom_loss(y_true, y_pred):
    # do things with y_pred
    return loss
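A minimal end-to-end sketch of this pattern, assuming a toy Dense model and a stand-in loss (the mean squared magnitude of the prediction; substitute your own similarity measure). The dummy targets passed to fit() are never read by the loss; they exist only to satisfy the Keras API:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# The y_true argument is a placeholder; Keras calls fn(y_true, y_pred),
# but the loss below only looks at y_pred.
def custom_loss(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_pred))

# Toy model for illustration; replace with your own architecture.
model = keras.Sequential([keras.layers.Input(shape=(8,)),
                          keras.layers.Dense(4)])
model.compile(loss=custom_loss, optimizer="adam")

x_train = np.random.rand(16, 8).astype("float32")
dummy_y = np.zeros((16, 4), dtype="float32")  # never used by custom_loss
model.fit(x_train, dummy_y, epochs=1, verbose=0)
```

The dummy array only needs a matching first dimension (and a shape Keras accepts for the output), since the loss ignores its values entirely.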

Adding custom arguments

You may also need an extra parameter, such as margin, inside your loss function. Keras still requires the function it calls to take exactly two arguments, but there is a workaround: wrap your loss in a lambda.

def custom_loss(y_pred, margin):
    # do things with y_pred
    return loss

and pass it to compile like this:

model.compile(loss=lambda y_true, y_pred: custom_loss(y_pred, margin), ...)
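To make the wrapping concrete, here is a small sketch with a hypothetical hinge-style penalty standing in for the real loss. The lambda closes over margin and exposes the (y_true, y_pred) signature Keras expects:

```python
import tensorflow as tf

# Hypothetical loss with an extra margin argument (illustration only).
def custom_loss(y_pred, margin):
    # Penalize predictions that fall below the margin.
    return tf.reduce_mean(tf.maximum(0.0, margin - y_pred))

margin = 1.0

# The lambda adapts the signature to what Keras will call.
loss_fn = lambda y_true, y_pred: custom_loss(y_pred, margin)

# loss_fn(y_true, y_pred) works even though y_true is unused:
value = loss_fn(None, tf.constant([0.25, 0.75]))
```

functools.partial achieves the same adaptation if you prefer a named callable over a lambda.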
Saravanabalagi Ramachandran

I think the best solution is to write a custom training loop instead of using the model.fit method.

A complete walkthrough is published on the TensorFlow tutorials page.
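A minimal sketch of such a loop, assuming TensorFlow 2.x, a toy Dense model, and a stand-in unsupervised loss. With tf.GradientTape you compute the loss directly from the model's outputs, so no targets are needed anywhere:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

# Toy model for illustration; replace with your own architecture.
model = keras.Sequential([keras.layers.Input(shape=(8,)),
                          keras.layers.Dense(4)])
optimizer = keras.optimizers.Adam(learning_rate=0.001)

def unsupervised_loss(y_pred):
    # Stand-in objective: penalize large outputs. No y_true anywhere.
    return tf.reduce_mean(tf.square(y_pred))

x_train = np.random.rand(16, 8).astype("float32")
for step in range(3):
    with tf.GradientTape() as tape:
        y_pred = model(x_train, training=True)
        loss = unsupervised_loss(y_pred)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
```

Since you control the loss computation yourself, the two-argument signature constraint of model.compile never comes into play.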

Celso França