import torch
import torch.nn as nn

class crossentropy(nn.Module):
    def __init__(self):
        super(crossentropy, self).__init__()

    def forward(self, y_1, y):
        m = nn.Softmax(dim=1)
        output = m(y_1)                             # probabilities in [0, 1]
        loss = -1.0*torch.sum(y*torch.log(output))  # log(0) -> -inf when a probability underflows to zero
        l = torch.mean(loss)
        return l
Mateen Ulhaq

1 Answer


Using LogSoftmax instead of Softmax followed by log should most likely solve this issue. The NaN is probably caused by log returning infinite values for softmax outputs that are (near) zero due to numerical error.
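A minimal sketch of what that could look like (the class name, tensor shapes, and the one-hot target construction are illustrative assumptions, not taken from the question):

import torch
import torch.nn as nn

class CrossEntropyLogSoftmax(nn.Module):
    """Cross-entropy computed from LogSoftmax, which stays finite for finite logits."""
    def __init__(self):
        super().__init__()
        self.log_softmax = nn.LogSoftmax(dim=1)

    def forward(self, logits, target):
        log_probs = self.log_softmax(logits)          # numerically stable log-probabilities
        loss = -torch.sum(target * log_probs, dim=1)  # per-sample cross-entropy
        return loss.mean()                            # average over the batch

# quick check on random data (illustrative only)
logits = torch.randn(4, 3)
target = torch.eye(3)[torch.tensor([0, 2, 1, 0])]     # one-hot targets
print(CrossEntropyLogSoftmax()(logits, target))

For hard integer labels, the built-in nn.CrossEntropyLoss already combines LogSoftmax and NLLLoss internally, so it avoids this instability out of the box.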

tbrugere
  • class crossentropy(nn.Module):
        def __init__(self, m=0):
            super(crossentropy, self).__init__()
            self.m = m
        def forward(self, y_1, y):
            m = self.m
            m = nn.LogSoftmax()
            output = m(y_1)
            loss = -1.0*torch.sum(y*output)
            l = torch.mean(loss)
            return l
    – abhilash sharma Dec 16 '21 at 03:48
  • It still gives the same error. I tried removing the log and it gives a good accuracy score but a negative loss – abhilash sharma Dec 16 '21 at 03:52
  • Still NaN? What is the output of the LogSoftmax? – tbrugere Dec 16 '21 at 14:04
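One way to answer that question is a small debugging sketch (not from the thread; check_logits and the random tensor are placeholders for the real logits in the training loop):

import torch
import torch.nn as nn

def check_logits(y_1):
    """Print whether the logits and the LogSoftmax output contain non-finite values."""
    log_probs = nn.LogSoftmax(dim=1)(y_1)
    print("NaN in logits:    ", torch.isnan(y_1).any().item())
    print("finite log-probs: ", torch.isfinite(log_probs).all().item())
    print("log-prob range:   ", log_probs.min().item(), log_probs.max().item())

check_logits(torch.randn(4, 3))  # replace with the actual logits tensor

If the logits themselves already contain NaN (for example from an exploding learning rate), switching to LogSoftmax alone will not fix that.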