I'm training a Keras model with a custom loss function that I had already tested successfully before. Recently I started training it on a new dataset and I get a strange result: the model trains fine, but the `val_loss` comes out as `nan`.
Here is the loss:
```python
from keras import backend as k
from keras.activations import relu
from keras.layers import Lambda, add

def Loss(y_true, y_pred):
    y_pred = relu(y_pred)
    z = k.maximum(y_true, y_pred)              # element-wise max of target and prediction
    y_pred_negativo = Lambda(lambda x: -x)(y_pred)
    w = k.abs(add([y_true, y_pred_negativo]))  # |y_true - y_pred|
    if k.sum(z) == 0:
        error = 0
    elif k.sum(y_true) == 0 and k.sum(z) != 0:
        error = 100
    elif k.sum(y_true) == 0 and k.sum(z) == 0:
        error = 0
    else:
        error = (k.sum(w) / k.sum(z)) * 100    # percentage error
    return error
```
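One thing I have started to wonder about is the Python `if`/`elif` on backend tensors: as far as I understand, in graph mode a comparison like `k.sum(z) == 0` never evaluates the tensor, so the division could still be hitting a zero denominator. For reference, here is how I think the same logic would look written purely with backend ops, with an epsilon-guarded denominator (just a sketch I'm considering, untested, not what I currently run):

```python
from keras import backend as k

def loss_switch(y_true, y_pred):
    # Sketch only: same branching as my Loss() above, but expressed with
    # tensor ops so it is evaluated inside the graph.
    y_pred = k.relu(y_pred)
    z = k.maximum(y_true, y_pred)
    w = k.abs(y_true - y_pred)
    sum_z = k.sum(z)
    sum_true = k.sum(y_true)

    # Epsilon keeps the division from producing NaN when sum_z is 0.
    ratio = (k.sum(w) / (sum_z + k.epsilon())) * 100

    # 0 if everything is zero, 100 if y_true is all zeros but the
    # prediction is not, otherwise the percentage ratio.
    error = k.switch(k.equal(sum_z, 0),
                     k.zeros_like(sum_z),
                     k.switch(k.equal(sum_true, 0),
                              100.0 * k.ones_like(sum_z),
                              ratio))
    return error
```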
I have tried many things:
- Looked at the data for NaNs (see the check sketch after this list)
- Normalization - on and off
- Clipping - on and off
- Dropouts - on and off
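This is roughly the check I ran on the data, plus a callback I'm thinking of adding to narrow down when the NaN first appears (sketch only; `x_train`, `y_train`, `x_val`, `y_val` and `model` stand in for my actual arrays and model):

```python
import numpy as np
from keras.callbacks import TerminateOnNaN

# Placeholders for my actual arrays.
for name, arr in [("x_train", x_train), ("y_train", y_train),
                  ("x_val", x_val), ("y_val", y_val)]:
    print(name, "NaNs:", np.isnan(arr).any(), "Infs:", np.isinf(arr).any())

# Stop training as soon as the loss itself becomes NaN,
# so I can see at which epoch/batch it starts.
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          callbacks=[TerminateOnNaN()])
```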
Someone told me that it could be a problem with the CUDA installation, but I'm not sure.
Any idea what the problem is, or how I can diagnose it?