
I have a problem with my own implementation of Keras' binary cross-entropy function. Before asking this question I read this post and this post. They are very useful, but when I use the following code from the second post

from tensorflow.keras import backend as K

def BinaryCrossEntropy(y_true, y_pred):
    y_pred = K.clip(y_pred, K.epsilon(), 1 - K.epsilon())
    term_0 = (1 - y_true) * K.log(1 - y_pred + K.epsilon())
    term_1 = y_true * K.log(y_pred + K.epsilon())
    return -K.mean(term_0 + term_1, axis=0)

in this excellent tutorial, the results are far from those of the original implementation.
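
To rule out the function itself, here is a quick standalone check against the built-in loss. The inputs are made-up probabilities, purely for illustration, and I pass from_logits=False because the snippet above clips its inputs to (0, 1) as if they were probabilities:

import numpy as np
import tensorflow as tf

# Made-up labels and predicted probabilities, only for this check.
y_true = np.array([[0.0], [1.0], [1.0], [0.0]], dtype=np.float32)
y_pred = np.array([[0.1], [0.8], [0.6], [0.3]], dtype=np.float32)

keras_bce = tf.keras.losses.BinaryCrossentropy(from_logits=False)
print(keras_bce(y_true, y_pred).numpy())           # built-in loss on probabilities
print(BinaryCrossEntropy(y_true, y_pred).numpy())  # function from the second post

# Both should print roughly 0.299; any difference is at the level of K.epsilon().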

To clarify further, I have written two versions of each loss function. Here is the discriminator pair:

import tensorflow as tf

def d_loss(real_output, generated_output):
    """The discriminator loss function."""
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    return bce(tf.ones_like(real_output), real_output) + bce(
        tf.zeros_like(generated_output), generated_output
    )

def new_d_loss(real_output, generated_output):
    """My re-implementation of the discriminator loss."""
    generated_output = K.clip(generated_output, K.epsilon(), 1 - K.epsilon())
    term_0 = (1 - real_output) * K.log(1 - generated_output + K.epsilon())
    term_1 = real_output * K.log(generated_output + K.epsilon())
    return -K.mean(term_0 + term_1, axis=0)
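
To make the discrepancy concrete, this is the quick side-by-side check I run; the discriminator outputs below are made-up logits, only for illustration:

import numpy as np

# Made-up raw discriminator outputs (logits), only for this comparison.
real_output = np.array([[1.2], [0.7], [-0.3]], dtype=np.float32)
generated_output = np.array([[-0.8], [0.1], [-1.5]], dtype=np.float32)

print(d_loss(real_output, generated_output).numpy())      # built-in, from_logits=True
print(new_d_loss(real_output, generated_output).numpy())  # my re-implementation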

And here is the generator pair:

def g_loss(generated_output):
    """The generator loss function."""
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    return bce(tf.ones_like(generated_output), generated_output)

def new_g_loss(generated_output):
    """My re-implementation of the generator loss."""
    generated_output = K.clip(generated_output, K.epsilon(), 1 - K.epsilon())
    term_0 = 1 * K.log(1 - generated_output + K.epsilon())
    # term_1 = real_output * K.log(generated_output + K.epsilon())
    return -K.mean(term_0, axis=0)
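
And the same kind of side-by-side check for the generator, again with made-up logits:

import numpy as np

generated_output = np.array([[0.4], [-0.9], [1.1]], dtype=np.float32)

print(g_loss(generated_output).numpy())      # built-in, from_logits=True
print(new_g_loss(generated_output).numpy())  # my re-implementation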

The original loss functions (d_loss and g_loss) work perfectly, but when I replace them with the new ones (new_d_loss and new_g_loss), the results are not the same, even when I only swap in new_d_loss and keep the original g_loss.
