
I want to calculate the loss function of my Keras model based on dice_coef, and I found this expression on the internet:

from keras import backend as K

smooth = 1.

def dice_coef(y_true, y_pred):
    # Flatten both tensors to 1-D
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    # Element-wise product sums the overlap (true positives)
    intersection = K.sum(y_true_f * y_pred_f)
    # Smoothed Dice coefficient
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

I cannot understand why we add this smooth variable to both the numerator and the denominator.

baddy

1 Answer


Basically, you use smooth to avoid division by zero.

If for some reason both the ground truth and the prediction are all zeros, the ratio becomes 0/0 (NaN), and backpropagation will ruin the training procedure.

Note that it is added to both the numerator and the denominator to keep it from skewing the coefficient in any other case.
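To see this concretely, here is a minimal NumPy sketch of the same formula (NumPy stands in for the Keras backend so it runs standalone; the function signature and the smooth keyword are illustrative, not the original code):

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # Flatten and compute the smoothed Dice coefficient, as in the question
    y_true_f = np.ravel(y_true).astype(float)
    y_pred_f = np.ravel(y_pred).astype(float)
    intersection = np.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (np.sum(y_true_f) + np.sum(y_pred_f) + smooth)

# Mask and prediction both all zeros: without smooth this is 0/0 (NaN);
# with smooth it evaluates to (0 + 1) / (0 + 0 + 1) = 1, a perfect score
# for correctly predicting an empty mask.
empty = np.zeros((4, 4))
print(dice_coef(empty, empty))            # 1.0
print(dice_coef(empty, empty, smooth=0))  # nan
```

Because smooth appears in both the numerator and the denominator, its effect on a non-empty mask is only on the order of smooth divided by the number of positive pixels, which is negligible for typical masks.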

David