
When I run the program I get this error:

    RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

However, I set gen_y = torch.tensor(gen_y, requires_grad=True), but this has not helped; gen_y.grad_fn is still None. I also tried x = torch.tensor(x, requires_grad=True), and that is not working either. I guess it could be a problem related to the version of PyTorch. How can I solve this problem?
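For what it's worth, I can reproduce the grad_fn-is-None behaviour with a tiny snippet (illustrative only, not my actual model):

    import torch

    a = torch.randn(3, requires_grad=True)
    b = a * 2                                # b is produced by an op, so it has a grad_fn
    c = torch.tensor(b, requires_grad=True)  # copies the data into a brand-new leaf tensor
    print(b.grad_fn)                         # <MulBackward0 object at ...>
    print(c.grad_fn)                         # None -- c is cut off from the original graph

The relevant training code is below: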

    def training(self, net, datasets):
        """
          input:
            net: (object) model & optimizer
            datasets : (list) [train, val] dataset object
        """
        args = self.args
        net.model.train()
        steps = len(datasets[0]) // args.batch_size
        if args.trigger == 'epoch':
            args.epochs = args.terminal
            args.iters = steps * args.terminal
            args.iter_interval = steps * args.interval
        else:
            args.epochs = args.terminal // steps + 1
            args.iters = args.terminal
            args.iter_interval = args.interval

        train_loss, train_acc = 0, 0
        start = time.time()
        for epoch in range(1, args.epochs + 1):
            self.epoch = epoch
            # setup data loader
            data_loader = DataLoader(datasets[0], args.batch_size, num_workers=4,
                                     shuffle=True)
            batch_iterator = iter(data_loader)
            for step in range(steps):
                self.iter += 1
                if self.iter > args.iters:
                    self.iter -= 1
                    break
                # convert numpy.ndarray into pytorch tensor
                x, y = next(batch_iterator)
                x = Variable(x)
                y = Variable(y)
                if args.cuda:
                    x = x.cuda()
                    y = y.cuda()
                # training
                x = torch.tensor(x,requires_grad=True)
                gen_y = net.model(x)
                gen_y = torch.tensor(gen_y,requires_grad=True)
                print(gen_y.requires_grad)
                print(gen_y.grad_fn)
                if self.is_multi:
                    gen_y = gen_y[0]
                    y = y[0]
                loss = F.binary_cross_entropy(gen_y, y)
                # Update generator parameters

                net.optimizer.zero_grad()
                loss.backward()
  • please [format](https://stackoverflow.com/help/formatting) your code. – Shai Dec 02 '19 at 11:02
  • get rid of the line `gen_y = torch.tensor(...` that will definitely break your computation graph. Assuming that's not the only problem then there is probably something in your model that's breaking the computation graph. We would need to see the details of your model to help further. – jodag Dec 02 '19 at 14:33

1 Answer


I had the same error; setting requires_grad = True did not work. If you want to be able to backward through your first call to .grad (to get gradients for your gradient penalty), you need to give it create_graph=True. I believe the error you mentioned is not the complete one; if your full traceback looks like this:

    /usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
         97     Variable._execution_engine.run_backward(
         98         tensors, grad_tensors, retain_graph, create_graph,
    ---> 99         allow_unreachable=True)  # allow_unreachable flag

Then go to the "tensor.py" file and change create_graph = False to create_graph = True.
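For example, with a gradient penalty, create_graph=True can also be passed directly at the call site of torch.autograd.grad instead of editing the library source (a rough sketch; discriminator and interp are placeholder names, not from the question):

    import torch

    def gradient_penalty(discriminator, interp):
        # interp: the samples the penalty is taken with respect to
        interp = interp.requires_grad_(True)
        d_out = discriminator(interp)
        # create_graph=True keeps the graph of this backward pass,
        # so the penalty term itself can later be backpropagated through
        grads, = torch.autograd.grad(outputs=d_out.sum(), inputs=interp,
                                     create_graph=True)
        grads = grads.view(grads.size(0), -1)
        return ((grads.norm(2, dim=1) - 1) ** 2).mean()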