Is there a way in Keras or TensorFlow to give samples an extra weight only if they are incorrectly classified? I.e., a combination of class weight and sample weight, but applying the sample weight only for one of the outcomes in a binary classification?

Marcin Możejko

Nickpick
- Do you want these weights to be computed at each epoch separately or to be fixed for the whole training process? – Marcin Możejko Feb 10 '18 at 11:32
- Whatever works. As I understand it, they are normally updated after each batch? – Nickpick Feb 10 '18 at 11:33
- It is possible. Provide us with the loss you use and the way you want to compute the weights, so we can provide you with an implementation. – Marcin Możejko Feb 10 '18 at 11:33
- I'm just using the binary cross-entropy function of Keras. – Nickpick Feb 10 '18 at 11:34
1 Answer
Yes, it's possible. Below is an example of how to add extra weight to true positives, false positives, true negatives, and false negatives:
from keras import backend as K

def reweight(y_true, y_pred, tp_weight=0.2, tn_weight=0.2, fp_weight=1.2, fn_weight=1.2):
    # Threshold predictions at 0.5 to get hard class labels
    y_pred_classes = K.greater_equal(y_pred, 0.5)
    y_pred_classes_float = K.cast(y_pred_classes, K.floatx())

    # Mask of misclassified examples
    wrongly_classified = K.not_equal(y_true, y_pred_classes_float)
    wrongly_classified_float = K.cast(wrongly_classified, K.floatx())

    # Mask of correctly classified examples
    correctly_classified = K.equal(y_true, y_pred_classes_float)
    correctly_classified_float = K.cast(correctly_classified, K.floatx())

    # Indicator tensors for tp, tn, fp, fn
    tp = correctly_classified_float * y_true          # predicted 1, true 1
    tn = correctly_classified_float * (1 - y_true)    # predicted 0, true 0
    fp = wrongly_classified_float * (1 - y_true)      # predicted 1, true 0
    fn = wrongly_classified_float * y_true            # predicted 0, true 1

    # Per-example weight and weighted binary cross-entropy
    weight_tensor = tp_weight * tp + tn_weight * tn + fp_weight * fp + fn_weight * fn
    loss = K.binary_crossentropy(y_true, y_pred)
    weighted_loss = loss * weight_tensor
    return weighted_loss
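
To use this as the training loss, pass the function to `model.compile`. A minimal sketch; the model architecture and data below are purely illustrative:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Illustrative binary classifier
model = Sequential([
    Dense(8, activation='relu', input_shape=(4,)),
    Dense(1, activation='sigmoid'),
])

# Any callable with signature (y_true, y_pred) can be used as a Keras loss
model.compile(optimizer='adam', loss=reweight, metrics=['accuracy'])

# Dummy data purely for illustration
x = np.random.rand(32, 4)
y = np.random.randint(0, 2, size=(32, 1)).astype('float32')
model.fit(x, y, epochs=1, batch_size=8)

To change the default weights without editing the function, you can wrap it, e.g. `loss=lambda yt, yp: reweight(yt, yp, fp_weight=2.0)`.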

Marcin Możejko
- Very interesting, but how can I pass in the vector with the individual weights that need to be applied as punishment for each sample if incorrectly classified? – Nickpick Feb 10 '18 at 12:13
- But this function computes indices of misclassified examples, so you don't need to pass one. Do you want to manually choose the examples you want to punish more? – Marcin Możejko Feb 10 '18 at 12:14
- No, just the ones that are incorrectly classified need to be punished more. I.e. there is a payoff that is either +1 (correctly classified) or -x (incorrectly classified). I described the problem slightly differently here: https://stats.stackexchange.com/questions/327763/custom-loss-function-with-weights-and-asymmetric-payoff. I assume the weight vector would need to be passed into the loss function. – Nickpick Feb 10 '18 at 12:16
- Is it correct that tn = ... * (1 - y_true)? Shouldn't it rather be ... * K.equal(y_true, False)? Similar to how I describe it here: https://stackoverflow.com/questions/48744092/weighted-loss-function-for-payoff-maximization – Nickpick Feb 12 '18 at 15:23
- This is fine. The target is `float32`, so one can use this heuristic. Btw., False is equal to 0. – Marcin Możejko Feb 12 '18 at 15:29
- I'm not worried about the data type, but your suggestion of 1 - y_true is not equal to y_true == False (which I think is what we should use in the term that defines true negatives). Would you not agree? – Nickpick Feb 12 '18 at 15:31
- This answer is most likely incorrect as described here: https://stackoverflow.com/questions/48771961/weighting-true-positives-vs-true-negatives/48833827#48833827 – Nickpick Feb 17 '18 at 15:17
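
As a quick sanity check of the `(1 - y_true)` point debated above: for 0/1 float targets, `1 - y_true` is exactly the indicator of the negative class, i.e. it agrees with `y_true == 0`. A small NumPy sketch with made-up values:

import numpy as np

# Made-up labels and thresholded predictions
y_true = np.array([1., 0., 1., 0.])
y_pred_classes = np.array([1., 1., 0., 0.])  # after thresholding at 0.5

correct = (y_true == y_pred_classes).astype(float)   # [1, 0, 0, 1]
wrong = (y_true != y_pred_classes).astype(float)     # [0, 1, 1, 0]

tp = correct * y_true          # [1, 0, 0, 0]  predicted 1, true 1
tn = correct * (1 - y_true)    # [0, 0, 0, 1]  predicted 0, true 0
fp = wrong * (1 - y_true)      # [0, 1, 0, 0]  predicted 1, true 0
fn = wrong * y_true            # [0, 0, 1, 0]  predicted 0, true 1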