Autograd can automatically differentiate native Python and NumPy code. It handles a large subset of Python's features, including loops, conditionals, recursion, and closures, and it can even take derivatives of derivatives of derivatives. Its main intended application is gradient-based optimization. The name also refers to torch.autograd, the automatic differentiation engine built into the deep learning framework PyTorch, which most of the questions under this tag concern.
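A minimal sketch of what the original Autograd package does (assuming `pip install autograd`; the function here is just an illustration):

```python
# Differentiate a plain Python/NumPy function with HIPS Autograd.
import autograd.numpy as np   # thinly wrapped NumPy
from autograd import grad

def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))

d_tanh = grad(tanh)       # derivative
dd_tanh = grad(d_tanh)    # derivative of the derivative
print(d_tanh(1.0), dd_tanh(1.0))
```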
Questions tagged [autograd]
362 questions
2
votes
1 answer
torch.nn.DataParallel with torch.autograd.grad in loss function fails
I have a neural network model that represents the surface of an object. For this to work, the gradients are calculated in the loss function (because, for example, it is a property of signed distance fields (SDFs) that the gradient is always unit…

Elyora
- 21
- 3
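A minimal sketch of the underlying pattern (not the asker's code; the network and the eikonal-style unit-gradient penalty are assumptions). Computing an input gradient inside the loss needs `create_graph=True` so the penalty itself stays differentiable; how this interacts with torch.nn.DataParallel is exactly what the question asks.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
pts = torch.randn(128, 3, requires_grad=True)   # query points
sdf = net(pts)

# d(sdf)/d(pts); create_graph=True keeps the gradient differentiable
(grad_pts,) = torch.autograd.grad(sdf.sum(), pts, create_graph=True)
loss = ((grad_pts.norm(dim=-1) - 1.0) ** 2).mean()  # unit-gradient penalty
loss.backward()
```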
2
votes
2 answers
Trouble with a minimal HVP on a PyTorch model
While autograd's hvp tool seems to work very well for functions, once a model becomes involved, Hessian-vector products seem to go to 0. Some code follows.
First, I define the world's simplest model:
class SimpleMLP(nn.Module):
    def __init__(self, in_dim,…

user650261
- 2,115
- 5
- 24
- 47
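For reference, a hand-rolled Hessian-vector product over a model's parameters (a sketch with a stand-in model, not the asker's code). An all-zero HVP is commonly caused by omitting `create_graph=True` on the first grad call, which turns the first-order gradients into constants.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)                      # stand-in for SimpleMLP
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)

params = list(model.parameters())
grads = torch.autograd.grad(loss, params, create_graph=True)  # keep the graph
v = [torch.randn_like(p) for p in params]                     # direction vector
dot = sum((g * vi).sum() for g, vi in zip(grads, v))
hvp = torch.autograd.grad(dot, params)       # H @ v, one tensor per parameter
```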
2
votes
1 answer
Extending PyTorch: Python vs. C++ vs. CUDA
I have been trying to implement a custom Conv2d module where grad_input (dx) and grad_weight (dw) are calculated using different grad_output (dy) values. I implemented this by extending torch.autograd as in the PyTorch tutorials.
However I am…

How_To
- 27
- 2
- 10
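The pattern from the extending-PyTorch tutorial looks roughly like this (a sketch; the halving of dy for the weight gradient is a made-up stand-in for the asker's "different grad_output values"):

```python
import torch

class MyConv2d(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, weight):
        ctx.save_for_backward(x, weight)
        return torch.nn.functional.conv2d(x, weight)

    @staticmethod
    def backward(ctx, grad_output):
        x, weight = ctx.saved_tensors
        dy_for_dx = grad_output            # placeholder: dy used for dx
        dy_for_dw = grad_output * 0.5      # placeholder: a different dy for dw
        grad_input = torch.nn.grad.conv2d_input(x.shape, weight, dy_for_dx)
        grad_weight = torch.nn.grad.conv2d_weight(x, weight.shape, dy_for_dw)
        return grad_input, grad_weight
```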
2
votes
0 answers
How to get the gradient of multiple outputs w.r.t. each input in a PyTorch network?
Assume that I have a simple neural network with 3 inputs x and 2 outputs y. The training samples would look like:
x = torch.Tensor([[1,2,3],[4,5,6],[7,8,9]]) # each row is a sample
# [[a1,a2,a3],[b1,b2,b3],[c1,c2,c3]]
I want to get dy/dx for each…

abmin
- 133
- 2
- 12
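One way to get every dy/dx pair is `torch.autograd.functional.jacobian` (a sketch; `net` is a stand-in for the asker's network):

```python
import torch

net = torch.nn.Linear(3, 2)            # stand-in: 3 inputs -> 2 outputs
x = torch.tensor([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])

jac = torch.autograd.functional.jacobian(net, x)   # shape (3, 2, 3, 3)
# Per-sample 2x3 Jacobians; cross-sample blocks are zero for this model.
per_sample = torch.stack([jac[i, :, i, :] for i in range(x.shape[0])])
```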
2
votes
2 answers
Lack of gradient when creating tensor from numpy
Can someone please explain to me the following behavior?
import torch
import numpy as np
z = torch.tensor(np.array([1., 1.]), requires_grad=True).float()
def pre_main(z):
    return z * 3.0
x = pre_main(z)
x.backward(torch.tensor([1.,…

user650261
- 2,115
- 5
- 24
- 47
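The usual explanation, as a sketch of the fix (not the accepted answer): `.float()` returns a new non-leaf tensor, so gradients accumulate on the pre-cast leaf and `z.grad` stays `None`. Casting while creating the leaf avoids this:

```python
import torch
import numpy as np

# Make the float32 tensor itself the leaf that requires grad.
z = torch.tensor(np.array([1., 1.]), dtype=torch.float32, requires_grad=True)

def pre_main(z):
    return z * 3.0

x = pre_main(z)
x.backward(torch.tensor([1., 1.]))
print(z.grad)   # tensor([3., 3.])
```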
2
votes
1 answer
PyTorch error in trying to backward through the graph a second time
I'm trying to run this code: https://github.com/aitorzip/PyTorch-CycleGAN
I modified only the dataloader and transforms to be compatible with my data.
When trying to run it I get this error:
Traceback (most recent call last):
File…

Jarartur
- 149
- 1
- 10
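The error itself is generic (a sketch, unrelated to the CycleGAN specifics): a graph is freed after `backward()`, so a second pass needs `retain_graph=True`, or the tensors carried between iterations need to be detached.

```python
import torch

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()
loss.backward(retain_graph=True)   # keep the graph for another pass
loss.backward()                    # would raise without retain_graph above
```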
2
votes
0 answers
Can OpenMDAO co-operate with autograd or jax?
Could the autograd or jax packages be used to generate the equivalent of analytic derivatives for OpenMDAO explicit components? That is, something more accurate than finite differences (or perhaps more accurate or more general than the complex step…

Jacob Schwartz
- 83
- 8
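On the accuracy half of the question only (a sketch; wiring this into an OpenMDAO ExplicitComponent is the open part): `jax.grad` returns machine-precision derivatives rather than finite-difference estimates.

```python
import jax
import jax.numpy as jnp

f = lambda x: jnp.sin(x) * x ** 2
x0, h = 1.3, 1e-6
print(jax.grad(f)(x0))                      # exact reverse-mode derivative
print((f(x0 + h) - f(x0 - h)) / (2 * h))    # central finite difference
```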
2
votes
1 answer
PyTorch autograd: dimensionality of custom function gradients?
Question summary: How is the dimensionality of inputs and outputs handled in the backward pass of custom functions?
According to the manual, the basic structure of custom functions is the following:
class MyFunc(torch.autograd.Function):
…

Minze
- 113
- 5
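The shape contract, in a sketch (a made-up function, not the manual's example): `backward` receives one gradient per forward output, shaped like that output, and must return one gradient (or `None`) per forward input, shaped like that input.

```python
import torch

class ScaleSum(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w):               # x: (N, D), w: (D,)
        ctx.save_for_backward(x, w)
        return (x * w).sum(dim=1)         # output: (N,)

    @staticmethod
    def backward(ctx, grad_out):          # grad_out: (N,), like the output
        x, w = ctx.saved_tensors
        grad_x = grad_out[:, None] * w                 # (N, D), like x
        grad_w = (grad_out[:, None] * x).sum(dim=0)    # (D,), like w
        return grad_x, grad_w
```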
2
votes
1 answer
PyTorch autograd: Make the gradient of a parameter a function of another parameter
In PyTorch, how can I make the gradient of a parameter itself a function?
Here is a simple code snippet:
import torch
def fun(q):
    def result(w):
        l = w * q
        l.backward()
        return w.grad
    return result
w =…

ApPs
- 33
- 5
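Completed into a runnable sketch (the tail of the snippet is cut off above, so the usage lines here are assumptions): the returned closure maps w to d(w·q)/dw, which equals q.

```python
import torch

def fun(q):
    def result(w):
        l = w * q
        l.backward()
        return w.grad
    return result

q = torch.tensor(2.0)
w = torch.tensor(3.0, requires_grad=True)
print(fun(q)(w))   # tensor(2.) -- the gradient is q, a function of q
```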
2
votes
1 answer
PyTorch not updating weights when using autograd in loss function
I am trying to use the gradient of a network with respect to its inputs as part of my loss function. However, whenever I try to calculate it, the training proceeds but the weights do not update.
import torch
import torch.optim as optim
import…

wil3
- 2,877
- 2
- 18
- 22
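A sketch of the most common cause (assumed, since the full code is cut off): `torch.autograd.grad` without `create_graph=True` returns tensors detached from the graph, so the gradient term contributes nothing to the weight update.

```python
import torch

net = torch.nn.Linear(2, 1)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
before = [p.clone() for p in net.parameters()]

x = torch.randn(16, 2, requires_grad=True)
out = net(x)
(dx,) = torch.autograd.grad(out.sum(), x, create_graph=True)  # stay in graph
loss = dx.pow(2).mean()      # loss built from the input gradient
opt.zero_grad()
loss.backward()
opt.step()

# Verify the weights actually moved.
print(any(not torch.equal(b, p) for b, p in zip(before, net.parameters())))
```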
2
votes
0 answers
Using the autograd .backward() function to calculate an intermediate value in the forward pass of a PyTorch model
Hello, I am new to PyTorch. I have a simple PyTorch module whose output is a scalar loss function that depends on the derivative of some polynomial functions. Let's say the output of the forward pass is input * derivative(x^2 + y^2).
One…

somio sasa
- 51
- 2
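A sketch of the idea (made-up shapes and names): inside the forward pass, prefer `torch.autograd.grad` with `create_graph=True` over `.backward()`, so the intermediate derivative stays part of the graph.

```python
import torch

def forward(inp, x, y):
    p = x ** 2 + y ** 2
    dpdx, dpdy = torch.autograd.grad(p, (x, y), create_graph=True)
    return inp * (dpdx + dpdy)      # e.g. input * derivative(x^2 + y^2)

x = torch.tensor(1.0, requires_grad=True)
y = torch.tensor(2.0, requires_grad=True)
out = forward(torch.tensor(3.0), x, y)   # 3 * (2*1 + 2*2) = 18
out.backward()                           # gradients flow through dp/dx, dp/dy
```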
2
votes
1 answer
Computing matrix derivatives with torch.autograd.grad (PyTorch)
I am trying to compute matrix derivatives in PyTorch using torch.autograd.grad; however, I am running into a few issues. Here is a minimal working example to reproduce the error.
theta = torch.tensor(np.random.uniform(low=-np.pi, high=np.pi),…

CodeEnthusiast
- 165
- 12
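One frequent stumbling block with this setup (a sketch continuing the snippet's `theta`; the rotation matrix is an assumption): `torch.autograd.grad` of a non-scalar output needs `grad_outputs`, which seeds the vector-Jacobian product.

```python
import torch
import numpy as np

theta = torch.tensor(np.random.uniform(low=-np.pi, high=np.pi),
                     requires_grad=True)
R = torch.stack([torch.stack([torch.cos(theta), -torch.sin(theta)]),
                 torch.stack([torch.sin(theta),  torch.cos(theta)])])

# ones_like(R) gives d(sum of all entries)/d(theta), not an entry-wise dR.
(g,) = torch.autograd.grad(R, theta, grad_outputs=torch.ones_like(R))
```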
2
votes
0 answers
PyTorch autograd: "Can't disable profiler when it's not running"
I have an issue when trying to profile my model: it fails with the error "can't disable profiler when it's not running".
I have tried many ways, but I could not find out how to fix this error. Could someone give me a hint on how I could proceed?
P/s:…

KHOA LAI
- 21
- 3
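For reference, a balanced profiler run (a sketch; the question doesn't show the failing code, and this RuntimeError typically comes from mismatched or nested enter/exit of the profiler):

```python
import torch

x = torch.randn(100, 100, requires_grad=True)
with torch.autograd.profiler.profile() as prof:   # one matched enter/exit
    (x @ x).sum().backward()
print(prof.key_averages().table(sort_by="cpu_time_total"))
```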
2
votes
1 answer
PyTorch versus autograd.numpy
What are the big differences between PyTorch and Autograd's autograd.numpy package (since both of them can compute gradients automatically for you)?
I know that PyTorch can move tensors to the GPU, but is this the only reason for…

Xinglong Li
- 21
- 2
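The overlap is real (a sketch computing the same gradient both ways, assuming both packages are installed); beyond it, PyTorch adds GPU tensors, nn layers, optimizers, and a compiled core.

```python
import autograd.numpy as anp
from autograd import grad
import torch

f = lambda x: anp.sum(anp.tanh(x) ** 2)
print(grad(f)(anp.array([0.5, -1.0])))          # HIPS autograd

t = torch.tensor([0.5, -1.0], requires_grad=True)
torch.tanh(t).pow(2).sum().backward()           # PyTorch
print(t.grad)
```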
2
votes
0 answers
How do I make a new-style autograd function with a static forward method?
The error is:
Traceback (most recent call last):
File "", line 39, in
frame = detect(frame, net.eval(), transform)
File "", line 14, in detect
y = net(x)
File…

Kushagra Pal
- 124
- 5
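The new-style pattern the error message asks for, as a sketch (a made-up ReLU-like function, not the asker's detector): both methods are `@staticmethod` and the function is invoked with `.apply`, never instantiated.

```python
import torch

class Clip(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * (x > 0)   # bool mask promotes to float

x = torch.randn(4, requires_grad=True)
y = Clip.apply(x)      # new style: Clip.apply(x), not Clip()(x)
y.sum().backward()
```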