I'm trying to understand backpropagation in PyTorch a bit better. I have a code snippet that successfully backpropagates from the output d to the leaf variable a, but if I add a reshape step, a no longer gets a gradient (a.grad comes back as None).
I know reshape is out-of-place (it returns a new tensor rather than modifying a), but I'm still not sure how that explains the missing gradient.
Any thoughts?
Thanks.
import torch

# Works
a = torch.tensor([1.])
a.requires_grad = True
b = torch.tensor([1.])
c = torch.cat([a, b])
d = torch.sum(c)
d.backward()
print('a gradient is')
print(a.grad)  #=> tensor([1.])
# Doesn't work
a = torch.tensor([1.])
a.requires_grad = True
a = a.reshape(a.shape)  # rebinds the name a to the output of reshape
b = torch.tensor([1.])
c = torch.cat([a, b])
d = torch.sum(c)
d.backward()
print('a gradient is')
print(a.grad)  #=> None
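
In case it's relevant, here's a quick diagnostic I ran on my own (I'm assuming leaf status is the thing to look at, but I may be off): after the reshape, a is no longer a leaf tensor, and calling retain_grad() on it makes the gradient show up again.

# My own diagnostic sketch, assuming leaf status is what matters
a = torch.tensor([1.])
a.requires_grad = True
print(a.is_leaf)        #=> True
a = a.reshape(a.shape)
print(a.is_leaf)        #=> False, a now points at the reshape output
a.retain_grad()         # ask autograd to keep the gradient on this non-leaf tensor
b = torch.tensor([1.])
c = torch.cat([a, b])
d = torch.sum(c)
d.backward()
print(a.grad)           #=> tensor([1.])

So it seems like the gradient still flows, it just isn't stored on the rebound a by default. Is that the right way to think about it?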