I have a simple two-layer feedforward neural network whose layers are fully customized. I wrap it in a class NN(nn.Module). When I run the code below:

import torch
import torch.nn as nn

model = NN(*params)
optimizer = torch.optim.Adam(model.parameters(), lr=1)
loss_fn = nn.CrossEntropyLoss()

# first sample: forward, backward, parameter update
x, y = dataset[0]
out = model(x.unsqueeze(0))
loss = loss_fn(out, y)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# model.zero_grad()

# second sample: forward and backward again
x, y = dataset[1]
out = model(x.unsqueeze(0))
loss = loss_fn(out, y)
loss.backward()
I get the following error on the second loss.backward():

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.

This says that I'm calling the backward pass on the graph a second time even though it has already been freed. Indeed, calling loss.backward() frees the graph, optimizer.step() updates the parameters using the gradients, and optimizer.zero_grad() zeros the gradients. Then out = model(x.unsqueeze(0)) should build a new graph, so I was not expecting an error on the second backward pass.
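To check this reasoning, I ran the same sequence of calls with plain nn.Linear layers in place of my custom ones, and it does not raise the error. (This is a minimal sketch; the layer sizes and the two fake samples are made up and only stand in for my dataset.)

import torch
import torch.nn as nn

# standard layers standing in for my custom module (hypothetical sizes)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=1)
loss_fn = nn.CrossEntropyLoss()

# two fake samples standing in for dataset[0] and dataset[1]
x0, y0 = torch.randn(10), torch.tensor(1)
x1, y1 = torch.randn(10), torch.tensor(2)

out = model(x0.unsqueeze(0))
loss = loss_fn(out, y0.unsqueeze(0))
loss.backward()            # frees the graph built by this forward pass
optimizer.step()           # updates the parameters from the gradients
optimizer.zero_grad()      # clears the gradients

out = model(x1.unsqueeze(0))          # new forward pass builds a fresh graph
loss = loss_fn(out, y1.unsqueeze(0))
loss.backward()            # no error here: nothing from the first graph is reused

So with standard layers the second backward works as I expected, which makes me suspect my custom layers.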

I have looked into related questions about this error but have not been able to understand its source.

I can't understand this. There is no recurrence (that I'm aware of!) in the model. Data simply goes into layer1 and from there into layer2; a simplified sketch of the structure is below. If the problem resides in the inner layers, what could be causing it? I'm at a loss.
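For context, the module is structured roughly like this (a simplified placeholder: the real layers are custom and more involved, and the names and sizes here are only illustrative):

import torch
import torch.nn as nn

class NN(nn.Module):
    # placeholder for my actual module; the real layer1/layer2 are custom
    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.layer1 = nn.Linear(in_dim, hidden_dim)   # stands in for custom layer 1
        self.layer2 = nn.Linear(hidden_dim, out_dim)  # stands in for custom layer 2

    def forward(self, x):
        # data flows from layer1 into layer2, nothing recurrent
        return self.layer2(torch.relu(self.layer1(x)))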
