
I can get the gradient of the output w.r.t. the input by creating the input as below. But when I change the device from CPU to GPU, the gradient is not calculated (I get "None"). How can I get the gradient?

    action_batch = torch.tensor(action_batch, requires_grad=True,
                                dtype=torch.float32).to(self.device)


    self.critic_optimizer.zero_grad()
    print(action_batch.grad)
    critic_loss.backward()
    print("================**")
    print(action_batch.grad)
    self.critic_optimizer.step()

Gradient with CPU:

    None
    ================**
    tensor([[ 1.0538e-03, -1.6932e-04,  2.3841e-04,  9.9767e-04, -6.7008e-05,
              5.3555e-04],
            [-2.1002e-04, -3.2479e-05, -1.1147e-04, -1.9382e-04,  1.4175e-05,
             -1.0733e-04],
            [ 6.6836e-04, -1.9548e-05,  2.0143e-04,  3.8290e-04,  1.4578e-04,
              6.7998e-05],
            ...,
            [-9.7949e-05,  1.8074e-06, -2.8979e-05, -6.0738e-05, -1.3045e-06,
             -5.1929e-06],
            [-7.9130e-04,  2.3325e-04, -3.9635e-04, -1.1324e-03, -1.7819e-04,
             -2.4061e-04],
            [ 2.6802e-03,  1.1562e-05,  7.2858e-04,  1.7266e-03,  2.2337e-04,
              2.1766e-04]])

Gradient with GPU:

    None
    ================**
    None

1 Answer


I solved this problem! Move the tensor to the device first, and only then set requires_grad:

    action_batch = torch.FloatTensor(action_batch).to(self.device)
    action_batch.requires_grad = True
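The reason the original code only works on CPU: autograd accumulates gradients into .grad only for leaf tensors. When the target device is the CPU, .to(self.device) is a no-op that returns the same leaf tensor, so .grad gets filled. When the target is a GPU, .to() returns a new, non-leaf copy, and its .grad stays None. A minimal standalone sketch of the difference (toy tensors, not the original critic code):

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Broken on GPU: requires_grad is set before the device transfer, so on CUDA
    # .to() returns a new non-leaf tensor whose .grad is never populated.
    x_bad = torch.tensor([[1.0, 2.0]], requires_grad=True).to(device)
    print(x_bad.is_leaf)   # False on a GPU; True on CPU, where .to() is a no-op

    # Fixed: transfer first, then mark the resulting leaf tensor as requiring grad.
    x_good = torch.FloatTensor([[1.0, 2.0]]).to(device)
    x_good.requires_grad = True
    print(x_good.is_leaf)  # True

    loss = (x_good ** 2).sum()
    loss.backward()
    print(x_good.grad)     # tensor([[2., 4.]], ...) on both CPU and GPU

An alternative, if you want to keep the original construction, is to call x_bad.retain_grad() before backward(); that tells autograd to store the gradient on the non-leaf tensor as well.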
