Suppose my prediction result is pred and the corresponding label variable is label_face. Because label_face is heavily class-imbalanced in this segmentation problem, I want to replace nll_loss in PyTorch with a Dice Loss function.
pred = tensor([[-0.6813, -0.7052],
[-0.6467, -0.7419],
[-0.7436, -0.6451],
...,
[-0.5635, -0.8421],
[-0.6089, -0.7852],
[-0.7449, -0.6439]], device='cuda:0', grad_fn=<ViewBackward0>)
pred.shape --> torch.Size([7862, 2])
label_face = tensor([1, 1, 1, ..., 1, 1, 1], device='cuda:0')
label_face.shape --> torch.Size([7862])
loss = F.nll_loss(pred, label_face)
loss.backward()
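As far as I can tell, each row of pred sums to 1 after exponentiation, so pred seems to hold log-probabilities (for example the output of F.log_softmax), which is what nll_loss expects. A minimal check, assuming that is the case:

probs = pred.exp()       # assuming pred stores log-probabilities; each row sums to ~1
fg_prob = probs[:, 1]    # probability of class 1 for each of the 7862 points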
I then tried a custom soft Dice Loss for binary segmentation, as follows:
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.cuda.amp as amp

## Soft Dice Loss for binary segmentation
## pytorch autograd
class SoftDiceLoss(nn.Module):
    '''
    soft-dice loss, useful in binary segmentation
    '''
    def __init__(self, p=1, smooth=1):
        super(SoftDiceLoss, self).__init__()
        self.p = p
        self.smooth = smooth

    def forward(self, logits, labels):
        '''
        inputs:
            logits: tensor of shape (N, H, W, ...)
            labels: tensor of shape (N, H, W, ...)
        output:
            loss: tensor of shape (1, )
        '''
        probs = torch.sigmoid(logits)
        numer = (probs * labels).sum()
        denor = (probs.pow(self.p) + labels.pow(self.p)).sum()
        loss = 1. - (2 * numer + self.smooth) / (denor + self.smooth)
        return loss

custom_loss = SoftDiceLoss()
loss = custom_loss(pred, label_face)
loss.backward()
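From what I have read, the Dice loss should be computed on probabilities rather than on raw logits, so my best guess is a sketch like the one below. It assumes pred holds two-class log-probabilities (so exp() recovers probabilities) and label_face holds class indices 0/1; the names dice_loss_from_logprobs and fg_probs are just mine for illustration. I am not sure whether this is the right direction:

def dice_loss_from_logprobs(log_probs, labels, smooth=1.0):
    # Assumption: log_probs has shape (N, 2) with log-softmax outputs,
    # labels has shape (N,) with class indices 0/1.
    fg_probs = log_probs.exp()[:, 1]   # predicted probability of class 1
    fg_labels = labels.float()         # binary mask for class 1
    numer = (fg_probs * fg_labels).sum()
    denor = (fg_probs + fg_labels).sum()
    return 1. - (2 * numer + smooth) / (denor + smooth)

loss = dice_loss_from_logprobs(pred, label_face)
loss.backward()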
I have really tried, but I can't get it to work and don't know where the error is. I am a beginner student, and I hope you can help me with the code to customize this loss function. Any comments are highly appreciated. Thank you so much.