I am attempting to normalize the outputs of my classifier, which uses `BCEWithLogitsLoss` as its loss function. As far as I know, this loss applies the sigmoid function internally and then computes the binary cross-entropy. I want to normalize the sigmoid output before the loss is calculated. Is it possible to use `BatchNorm1d` together with `BCEWithLogitsLoss`? Or is passing the output tensor through `torch.sigmoid`, then through `BatchNorm1d`, and computing `BCELoss` separately the only possible solution?
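For concreteness, here is a minimal sketch of what I mean by combining the two — normalizing the raw logits with `BatchNorm1d` before handing them to `BCEWithLogitsLoss` (the shapes and batch size here are just placeholders):

```python
import torch
import torch.nn as nn

# Hypothetical shapes: a batch of 8 samples, 4 logits per sample.
logits = torch.randn(8, 4)
targets = torch.randint(0, 2, (8, 4)).float()

# Normalize the raw logits, then let BCEWithLogitsLoss apply
# the sigmoid internally as part of the loss computation.
bn = nn.BatchNorm1d(4)
criterion = nn.BCEWithLogitsLoss()

loss = criterion(bn(logits), targets)
print(loss.item())
```

Note that this normalizes the logits rather than the sigmoid outputs, which is why I am unsure whether it achieves what I want.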
Thanks.