
I'm using LightGBM and I need to implement a loss function that, during training, penalizes predictions that are lower than the target. In other words, I assume that underestimates are much worse than overestimates. I've found this suggestion, which does exactly the opposite:

import numpy as np

def custom_asymmetric_train(y_true, y_pred):
    # Objective: gradient and hessian of a squared error that is weighted
    # 10x when residual < 0, i.e. when the prediction overshoots the target
    residual = (y_true - y_pred).astype("float")
    grad = np.where(residual < 0, -2 * 10.0 * residual, -2 * residual)
    hess = np.where(residual < 0, 2 * 10.0, 2.0)
    return grad, hess

def custom_asymmetric_valid(y_true, y_pred):
    # Eval metric: the corresponding weighted squared error
    residual = (y_true - y_pred).astype("float")
    loss = np.where(residual < 0, (residual ** 2) * 10.0, residual ** 2)
    return "custom_asymmetric_eval", np.mean(loss), False

(Source: https://towardsdatascience.com/custom-loss-functions-for-gradient-boosting-f79c1b40466d)
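
For reference, here is a minimal sketch of how such a function pair is typically plugged into LightGBM, assuming the scikit-learn interface; X_train, y_train, X_valid, y_valid are placeholders for data prepared elsewhere:

import lightgbm as lgb

# The objective callable supplies gradients/hessians for training;
# the eval_metric callable is only used for monitoring on the eval set.
model = lgb.LGBMRegressor(objective=custom_asymmetric_train)
model.fit(
    X_train, y_train,
    eval_set=[(X_valid, y_valid)],
    eval_metric=custom_asymmetric_valid,
)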

How can I modify it for my purpose?


2 Answers


I believe this function is where you want to make a change.

def custom_asymmetric_valid(y_true, y_pred):
    residual = (y_true - y_pred).astype("float")
    loss = np.where(residual < 0, (residual**2)*10.0, residual**2)
    return "custom_asymmetric_eval", np.mean(loss), False

The line that computes the loss contains a comparison.

loss = np.where(residual < 0, (residual**2)*10.0, residual**2) 

When residual is less than 0, the loss is residual^2 * 10, whereas when it is above 0, the loss is just residual^2.

So if we change this less-than to a greater-than, the skew will flip:

loss = np.where(residual > 0, (residual**2)*10.0, residual**2) 
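Note that this change only affects the evaluation metric, which is used for monitoring and early stopping; it does not change the gradients used to fit the trees. To actually train with the flipped penalty, the same comparison flip would be needed in the objective function as well. A minimal sketch (the original training function with only the condition reversed):

import numpy as np

def custom_asymmetric_train(y_true, y_pred):
    residual = (y_true - y_pred).astype("float")
    # residual > 0 means y_pred < y_true, i.e. an underestimate: weight it 10x
    grad = np.where(residual > 0, -2 * 10.0 * residual, -2 * residual)
    hess = np.where(residual > 0, 2 * 10.0, 2.0)
    return grad, hess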
  • Thank you for your help. I need to make a change especially in the first function (the training one). Unfortunately, changing less to greater doesn't seem to work – K.D. Oct 26 '21 at 13:53

I think this would be helpful. It originates from Custom loss function with Keras to penalise more negative prediction:

from keras import backend as K

def customLoss(true, pred):
    diff = pred - true

    greater = K.greater(diff, 0)
    greater = K.cast(greater, K.floatx())  # 0 for lower, 1 for greater
    greater = greater + 1                  # 1 for lower, 2 for greater

    # use some kind of loss here, such as mse or mae, or pick one from keras
    # using mse:
    return K.mean(greater * K.square(diff))

# model is assumed to be a Keras model defined elsewhere
model.compile(optimizer='adam', loss=customLoss)
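
Note that as written this doubles the weight when pred > true, i.e. for overestimates, mirroring the linked question. For the goal stated in this question (penalizing underestimates), the comparison would presumably be reversed; a minimal sketch of that variant (customLossUnder is a hypothetical name):

from keras import backend as K

def customLossUnder(true, pred):
    diff = pred - true
    # 2x weight when pred < true, i.e. when the model underestimates
    lower = K.cast(K.less(diff, 0), K.floatx()) + 1  # 1 for over, 2 for under
    return K.mean(lower * K.square(diff))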