Is there a way to flip the effect of the cross-entropy loss?
I have a language model, and I want to train it so that it does not generate a specific text. Thus, I have two losses: one that I want to decrease (loss1) and another that I want to increase (loss2):
loss1 = outputs['loss1']
loss2 = 1 - outputs['loss2']
loss = loss1 + loss2
My question is: is it correct to subtract loss2 from 1, so that it increases instead of decreasing?
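
For context, here is a minimal sketch of the training step I have in mind. It assumes a Hugging Face-style causal language model whose forward call returns a cross-entropy value in .loss; the names training_step, batch_keep, and batch_forget are placeholders, not my exact code:

def training_step(model, batch_keep, batch_forget, optimizer):
    # batch_keep / batch_forget: dicts with input_ids and attention_mask,
    # e.g. as produced by a Hugging Face tokenizer (hypothetical names).

    # Cross-entropy on text the model should keep generating (to decrease).
    loss1 = model(**batch_keep, labels=batch_keep["input_ids"]).loss

    # Cross-entropy on the specific text the model should stop generating,
    # flipped by subtracting from 1 (the part I am unsure about).
    loss2 = 1 - model(**batch_forget, labels=batch_forget["input_ids"]).loss

    # Combined objective, as in the snippet above.
    loss = loss1 + loss2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()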