I'm working through the autograd section of the PyTorch tutorials, and I have two questions:
- Why do we need to clone grad_output and assign the clone to grad_input, rather than using a simple assignment, during backpropagation?
- What's the purpose of grad_input[input < 0] = 0? Does it mean we don't update the gradient when the input is less than zero?
Here's the code:
class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        """
        In the forward pass we receive a Tensor containing the input and return
        a Tensor containing the output. ctx is a context object that can be used
        to stash information for backward computation. You can cache arbitrary
        objects for use in the backward pass using the ctx.save_for_backward method.
        """
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        """
        In the backward pass we receive a Tensor containing the gradient of the loss
        with respect to the output, and we need to compute the gradient of the loss
        with respect to the input.
        """
        input, = ctx.saved_tensors
        grad_input = grad_output.clone()
        grad_input[input < 0] = 0
        return grad_input
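For reference, here is a minimal sketch of how I understand this Function is meant to be used, going through its .apply method so autograd calls the custom backward; the tensor shape and the loss are just placeholders I made up:

import torch

x = torch.randn(5, requires_grad=True)  # leaf tensor we want gradients for
y = MyReLU.apply(x)                      # forward: clamps negative entries to zero
loss = y.sum()                           # any scalar loss, just for illustration
loss.backward()                          # backward: MyReLU.backward receives grad_output
print(x.grad)                            # 1 where x > 0, 0 where x < 0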
Thanks a lot in advance.