I am currently trying to train a simple feed-forward neural net to solve the differential equation dy/dx = 2 on [-1, 1].
If we consider the neural net to be the function NN(x), I have set my loss to be MSE(NN'(x) - 2, 0), but I am having trouble computing the derivative NN'(x) = dNN(x)/dx with autograd.
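Spelled out over the grid points x_1, ..., x_N defined below, the loss I have in mind is

$$\mathcal{L} \;=\; \frac{1}{N}\sum_{i=1}^{N}\left(\frac{d\,NN(x_i)}{dx} - 2\right)^{2}$$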
My main idea is to partition the interval into 100 equal subintervals:
N = 101
X = torch.linspace(-1, 1, N)  # [-1, -0.98, -0.96, ..., 0.98, 1]
I then feed these points -1, -0.98, ... into the neural net one at a time and use the following code to compute the gradients:
for epoch in range(num_epochs):
    for j in range(N):
        optimiser.zero_grad()  # clear old gradients before the new pass
        input = torch.tensor([float(X[j])], requires_grad=True)
        output = model(input)
        output.backward()  # fills input.grad with dNN/dx at X[j]
        loss = criterion(input.grad, torch.tensor([2.0], requires_grad=True))
        loss.backward()  # second backward pass, meant to train the weights
        optimiser.step()
The issue is that I am going backwards through the graph twice (output.backward() and then loss.backward()), and I can't figure out another way to compute the derivative dNN(x)/dx.
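One alternative I came across in the docs is torch.autograd.grad with create_graph=True, which (as far as I understand it) returns the derivative as a tensor that is still part of the graph, so a loss built from it can be backpropagated into the model's weights. Here is a rough sketch of what I mean, feeding the whole grid at once instead of looping point by point (model, criterion, optimiser and num_epochs as above):

import torch

X = torch.linspace(-1, 1, 101).unsqueeze(1)  # shape (101, 1): one grid point per row

for epoch in range(num_epochs):
    optimiser.zero_grad()
    x = X.clone().requires_grad_(True)  # leaf tensor so we can differentiate w.r.t. it
    y = model(x)
    # dy/dx at every grid point; create_graph=True keeps the derivative itself
    # differentiable, so the loss below can reach the model's weights
    dydx, = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)
    loss = criterion(dydx, 2.0 * torch.ones_like(dydx))
    loss.backward()
    optimiser.step()

(Processing all 101 points in a single batch would also avoid the per-point Python loop, but I'm not sure this is the intended way to use autograd.grad.)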
Is this on the right track, or is there another way to go about this? It's my first SO post so please be kind :)