I am implementing code for semantic segmentation in Keras, and I wrote my loss function following the paper "Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations" (link: https://arxiv.org/abs/1707.03237) to balance the classes. My data are organized as (batch_size, ImDim1, ImDim2, Nclasses). My loss function is:
from keras import backend as K

eps = 1e-3

def dice(y_true, y_pred):
    # per-class weights: inverse of each class's volume in the ground truth,
    # then normalised so the weights sum to 1
    weights = 1. / K.sum(y_true, axis=[0, 1, 2])
    weights = weights / K.sum(weights)
    # weighted intersection and union, summed over all classes
    num = K.sum(weights * K.sum(y_true * y_pred, axis=[0, 1, 2]))
    den = K.sum(weights * K.sum(y_true + y_pred, axis=[0, 1, 2]))
    return 2. * (num + eps) / (den + eps)

def dice_loss(y_true, y_pred):
    return 1 - dice(y_true, y_pred)
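For reference, this is a minimal standalone sketch I use to evaluate the loss outside of training; the shapes and dummy values are made up purely for illustration, not my real data:

import numpy as np
from keras import backend as K

# fake batch: 2 images of 4x4 pixels with 3 classes, one-hot ground truth
y_true = np.zeros((2, 4, 4, 3), dtype="float32")
y_true[..., 0] = 1.0                                  # dummy labels: every pixel is class 0
y_pred = np.random.rand(2, 4, 4, 3).astype("float32")
y_pred /= y_pred.sum(axis=-1, keepdims=True)          # softmax-like predictions

# evaluate the loss on the dummy batch
print(K.eval(dice_loss(K.constant(y_true), K.constant(y_pred))))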
Written this way, which looks correct to me, the loss function returns nan and I do not understand why.