I am working on a project where I am getting an error I don't understand from the automatic differentiation (autograd) in PyTorch.
I am trying to minimize a function with respect to the x values, using the code at the bottom of this post. As I understand it, I should be able to make an initial guess, set its requires_grad flag to True, run the forward pass (scores = alpha(Xsamples, model, robustness)), compute the gradients with scores.backward(), and then update the guess with optimizer.step(). However, when I run this I get the following error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

I don't understand this, because I have set requires_grad on my initial guess. I looked on the forums for help, but most of the answers were about training neural networks, so their fixes did not work in this case. Any guidance on this would be greatly appreciated, thank you.

Here is the relevant function from my code (alpha, model and robustness are defined elsewhere in the project):
import torch

def minimize_alpha(model, robustness):
    epoch = 100
    learning_rate = 0.01
    N = 1
    Xsamples = torch.randn(1, 2)      # initial guess for x
    Xsamples.requires_grad = True     # track gradients w.r.t. the guess
    optimizer = torch.optim.SGD([Xsamples], lr=learning_rate)
    for i in range(epoch):
        scores = alpha(Xsamples, model, robustness)  # forward pass
        scores.backward()             # d(scores) / d(Xsamples)
        optimizer.step()              # update the guess
        optimizer.zero_grad()
    return Xsamples
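
For reference, here is a minimal, self-contained sketch of the pattern I have in mind, with a stand-in quadratic in place of alpha (toy_alpha and its form are just for illustration, not my real objective). This is how I expect Xsamples to be updated:

import torch

def toy_alpha(x):
    # stand-in for alpha(Xsamples, model, robustness): any function built
    # from differentiable torch operations that returns a scalar
    return ((x - 2.0) ** 2).sum()

Xsamples = torch.randn(1, 2, requires_grad=True)  # initial guess
optimizer = torch.optim.SGD([Xsamples], lr=0.01)

for i in range(100):
    optimizer.zero_grad()
    scores = toy_alpha(Xsamples)   # output carries a grad_fn
    scores.backward()              # fills in Xsamples.grad
    optimizer.step()               # gradient step on the guess

print(Xsamples)  # gradually moves toward [[2., 2.]]

As far as I can tell, my real loop does the same thing, which is why the error confuses me.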