
Given a neural network, I want to calculate the gradient of the output with respect to one part of the input using PyTorch's torch.autograd.grad. However, when I call the function I get a runtime error saying that the differentiated tensors were not used in the computational graph.

I would like to be able to call the following:

with torch.enable_grad():
    x.requires_grad_(True)
    score = score_model(x)  # score_model is a pretrained neural net
    x_grad = torch.autograd.grad(score, x[:,:,1,1])[0]

The error is as follows:

x_grad = torch.autograd.grad(score, x[:,:,1,1])[0]
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.

At first, I thought my architecture was set up in such a way that the computational graph did not flow to a specific part of my input. However, even when the slice covers the entire input, I still get the same error.

with torch.enable_grad():
    x.requires_grad_(True)
    score = score_model(x)  # score_model is a pretrained neural net
    x_grad = torch.autograd.grad(score, x[:,:,:,:])

x_grad = torch.autograd.grad(score, x[:,:,:,:])[0]
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
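For reference, here is a self-contained snippet that reproduces the error. The nn.Sequential stand-in for score_model, the input shape, and the .sum() (so the output is a scalar and grad_outputs is not needed) are my additions for reproducibility, not the real setup:

```python
import torch
import torch.nn as nn

# Toy stand-in for the pretrained score_model (an assumption for reproducibility).
score_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 4 * 4, 1))
x = torch.randn(2, 3, 4, 4)

with torch.enable_grad():
    x.requires_grad_(True)
    score = score_model(x).sum()  # summed to a scalar so grad_outputs is not needed

    # Works: x itself was used in the forward pass.
    (full_grad,) = torch.autograd.grad(score, x, retain_graph=True)

    # Fails: x[:,:,1,1] creates a *new* tensor via a fresh slice op,
    # and that new tensor was never used to compute score.
    raised = False
    try:
        torch.autograd.grad(score, x[:, :, 1, 1])
    except RuntimeError as err:
        raised = True
        print(err)
```
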

However, when I do not use tensor slicing, I do not encounter a problem, i.e.

with torch.enable_grad():
    x.requires_grad_(True)
    score = score_model(x)  # score_model is a pretrained neural net
    x_grad = torch.autograd.grad(score, x)

runs perfectly fine. Is there a way to calculate the gradient with respect to just part of the input tensor using torch.autograd.grad?
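To be clear about what I'm after: slicing the full gradient afterwards does give me the values I want, but it computes the gradient for the whole input first. A sketch with a hypothetical toy model in place of score_model:

```python
import torch
import torch.nn as nn

# Hypothetical toy model standing in for the pretrained score_model.
score_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 4 * 4, 1))
x = torch.randn(2, 3, 4, 4)

with torch.enable_grad():
    x.requires_grad_(True)
    score = score_model(x).sum()  # scalar output, so no grad_outputs needed
    (x_grad,) = torch.autograd.grad(score, x)  # gradient for the *whole* input

# Slicing the full gradient afterwards recovers the part I want.
partial = x_grad[:, :, 1, 1]
print(partial.shape)  # torch.Size([2, 3])
```
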
