import torch
import torch.nn as nn
import torch.nn.functional as F

class pu_fc(nn.Module):

    def __init__(self, input_dim):
        super(pu_fc, self).__init__()
        self.input_dim = input_dim

        self.fc1 = nn.Linear(input_dim, 50)
        self.fc2 = nn.Linear(50, 2)

        self.loss_fn = custom_NLL()  # user-defined loss, definition not shown

        device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
        self.bias = torch.autograd.Variable(torch.rand(1, 1), requires_grad=True).to(device)

    def forward(self, x):
        out = self.fc1(x)
        out = F.relu(out, inplace=True)
        out = self.fc2(out)
        out[..., 1] = out[..., 1] + self.bias
        print('bias: ', self.bias)  # debug print to watch the bias during training

        return out

As you can see from the code, I wanted to add a bias term to the second output channel. However, my implementation does not work: the bias term is not updated at all. It stays the same throughout training, so I assume it is not learnable. How can I make the bias term learnable? Is this possible? Below is some output of the bias during training. Any hint is appreciated, thanks in advance!

bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
Current Epoch: 1
Epoch loss:  0.4424589276313782
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
Current Epoch: 2
Epoch loss:  0.3476297199726105
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
bias:  tensor([[0.0930]], device='cuda:0', grad_fn=<CopyBackwards>)
Kevin Hu

1 Answer


The bias should be an nn.Parameter. Being a parameter means it shows up in model.parameters() and is automatically transferred to the specified device when calling model.to(device).

self.bias = nn.Parameter(torch.rand(1, 1))

Note: Don't use Variable; it was deprecated with PyTorch 0.4.0, released over two years ago, and all of its functionality has been merged into tensors.
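Applied to the module from the question, the fix might look like the following sketch (the user-defined custom_NLL loss is omitted here, since its definition isn't shown):

import torch
import torch.nn as nn
import torch.nn.functional as F

class pu_fc(nn.Module):

    def __init__(self, input_dim):
        super(pu_fc, self).__init__()
        self.fc1 = nn.Linear(input_dim, 50)
        self.fc2 = nn.Linear(50, 2)
        # Registering the bias as a parameter puts it in model.parameters(),
        # so the optimizer updates it, and model.to(device) moves it together
        # with the rest of the module.
        self.bias = nn.Parameter(torch.rand(1, 1))

    def forward(self, x):
        out = F.relu(self.fc1(x), inplace=True)
        out = self.fc2(out)
        out[..., 1] = out[..., 1] + self.bias  # the bias now receives gradients
        return out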

Michael Jungo
  • Thanks for the explanation. Do you know how I can find which weight corresponds to the bias term? I debugged using model.parameters(), but I found many layers of parameters. – Kevin Hu Jun 20 '20 at 14:50
  • Well, it's just `model.bias`. But you can use [`model.named_parameters()`](https://pytorch.org/docs/stable/nn.html#torch.nn.Module.named_parameters), so you get a tuple of *(name, param)*, where the name of the bias should be `bias`. – Michael Jungo Jun 20 '20 at 14:55
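
To illustrate the comment above, a quick sketch (the input_dim of 10 is an arbitrary value chosen for the example):

model = pu_fc(10)  # arbitrary input_dim, for illustration only
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
# The custom term appears under the name 'bias', alongside
# fc1.weight, fc1.bias, fc2.weight, and fc2.bias.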