Autograd can automatically differentiate native Python and NumPy code; the tag also covers the automatic differentiation engine of the same name in the deep learning framework PyTorch. It can handle a large subset of Python's features, including loops, conditionals, recursion, and closures, and it can take derivatives of derivatives of derivatives. The main intended application of Autograd is gradient-based optimization.
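A minimal usage sketch of the HIPS library the tag describes: grad transforms an ordinary Python/NumPy function into its derivative, and can be applied repeatedly for higher-order derivatives.

import autograd.numpy as np
from autograd import grad

def f(x):
    return x * np.sin(x)

df = grad(f)    # first derivative: sin(x) + x*cos(x)
ddf = grad(df)  # derivative of the derivative
print(df(1.0), ddf(1.0))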
Questions tagged [autograd]
362 questions
0 votes · 1 answer
Autograd TypeError when finding the gradient of scipy.stats.norm.pdf(x)
I want to find a simple gradient of the normal distribution pdf with scipy.stats.norm using autograd in Python.
import scipy.stats as stat
import autograd.numpy as np
from autograd import grad
def f(x):
    return stat.norm.pdf(x, 0.0,…

asked by L.L.
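A sketch of the usual fix (not the asker's full code): plain scipy.stats functions are opaque to autograd's tracing, so either use the wrapped routines autograd ships (e.g. autograd.scipy.stats.norm) or write the pdf with autograd.numpy primitives:

import autograd.numpy as np
from autograd import grad

def norm_pdf(x, mu=0.0, sigma=1.0):
    # normal pdf built from traceable autograd.numpy primitives
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

df = grad(norm_pdf)
print(df(1.0))  # analytic check: -(x - mu) / sigma**2 * pdf(x)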
0 votes · 1 answer
Gradient using Autograd package in Python
I am trying to replicate this standard example using the Autograd package.
While I am able to replicate other examples from this repository, this particular example throws an error as follows:
…

asked by honeybadger
0 votes · 1 answer
Most efficient way to reduce-sum a numpy array (with autograd)
I have two arrays:
index = [2,1,0,0,1,1,1,2]
values = [1,2,3,4,5,4,3,2]
I would like to produce:
[sum(v for i, v in zip(index, values) if i == ui) for ui in sorted(set(index))]
in the most efficient way possible.
My values are computed via…

asked by Labo
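One autograd-friendly sketch: np.bincount(index, weights=values) is the fast pure-NumPy answer, but autograd does not differentiate through it, so a one-hot matrix product keeps the reduction inside supported primitives:

import autograd.numpy as np
from autograd import grad

index = np.array([2, 1, 0, 0, 1, 1, 1, 2])

def segment_sum(values):
    one_hot = (index[:, None] == np.arange(index.max() + 1)).astype(float)
    return np.dot(values, one_hot)  # one column per distinct index

values = np.array([1., 2., 3., 4., 5., 4., 3., 2.])
print(segment_sum(values))                        # [ 7. 14.  3.]
print(grad(lambda v: segment_sum(v)[1])(values))  # indicator of group 1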
0 votes · 1 answer
Avoiding array assignment in autograd
I understand from the autograd tutorial that array assignment is not supported when the array is contained in the objective to be differentiated. However, I currently have the following objective function in my code which I would like to…

asked by p-value
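A minimal sketch of the standard workaround, assuming the objective overwrites entries in place: express the update functionally with np.where (or rebuild the array with np.concatenate) so autograd never sees an indexed assignment:

import autograd.numpy as np
from autograd import grad

def objective(x):
    # functional replacement for y = x.copy(); y[0] = 0.0
    mask = np.arange(x.shape[0]) == 0
    y = np.where(mask, 0.0, x)
    return np.sum(y ** 2)

print(grad(objective)(np.array([1.0, 2.0, 3.0])))  # [0. 4. 6.]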
0 votes · 1 answer
Cryptic error when using HIPS autograd with numpy.piecewise (ValueError: setting an array element with a sequence.)
I want to use HIPS autograd (https://github.com/HIPS/autograd) in Python 2.7 (in a Jupyter notebook) to find a parameter x. My forward model (observations at given time points t as a function of the parameter x) is a piecewise function of t…
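A sketch of the likely cause and fix (the two-piece model below is hypothetical): autograd has no derivative rule for np.piecewise, but the same forward model can usually be rewritten with np.where, which it does support:

import autograd.numpy as np
from autograd import grad

def model(x, t):
    # hypothetical two-piece forward model in t
    return np.where(t < 1.0, x * t, x * t ** 2)

def loss(x):
    t = np.linspace(0.0, 2.0, 9)
    obs = np.ones_like(t)
    return np.sum((model(x, t) - obs) ** 2)

print(grad(loss)(0.5))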
0 votes · 1 answer
PyTorch - modifications of autograd variables
In my PyTorch program, I have a matrix that is continuously updated at runtime.
I am wondering how to perform this update. I tried something like this:
matrix[0, index] = hidden[0]
Both matrix and hidden are autograd Variables. When using the…

asked by MBT
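A sketch in modern PyTorch (the question predates the removal of the Variable API): if the update is bookkeeping rather than something to backpropagate through, do the in-place write under torch.no_grad(); if gradients must flow through it, build a new tensor out of place, e.g. with torch.cat:

import torch

matrix = torch.zeros(3, 5, requires_grad=True)
hidden = torch.randn(1, 5)

with torch.no_grad():
    matrix[0] = hidden[0]  # untracked buffer update; no graph is recorded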
0 votes · 1 answer
Custom loss function does not minimize in PyTorch
I am using PyTorch code to train with a custom loss function in an unsupervised setting. However, the loss doesn't go down and stays the same over many epochs during the training phase. Please see the training code snippet below:
X = np.load(

asked by Vishal
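A generic sketch (not the asker's code) of the loop shape that usually fixes a flat loss: keep the entire loss computation on tensors (a detour through NumPy silently detaches the graph) and call zero_grad, backward, and step every iteration:

import numpy as np
import torch

X = torch.from_numpy(np.random.randn(100, 10).astype('float32'))
model = torch.nn.Linear(10, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

for epoch in range(50):
    opt.zero_grad()
    loss = model(X).pow(2).mean()  # stand-in custom loss; stays a tensor
    loss.backward()
    opt.step()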
0 votes · 0 answers
Issues with Derivative Calculator Autograd
I am using Autograd, a wrapper for NumPy that differentiates functions; given f, it returns the derivative f′. For example, tanh_prime = grad(np.tanh) returns the first derivative of tanh.
Whenever I apply the output of a grad call to an array, I get…

asked by Abraham Horowitz
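A sketch of the standard fix: grad() produces a derivative for scalar-output functions, so applying the result of grad(np.tanh) to an array raises; elementwise_grad is the vectorized variant:

import autograd.numpy as np
from autograd import elementwise_grad

tanh_prime = elementwise_grad(np.tanh)
x = np.array([0.0, 0.5, 1.0])
print(tanh_prime(x))  # equals 1 - tanh(x)**2 elementwise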
-1 votes · 1 answer
The running time per epoch keeps increasing when using torch.autograd.grad()
I am using the torch.autograd.grad() function to calculate the gradients of two loss functions (used to balance the weight of these two losses),
loss1_grads = torch.autograd.grad(loss1, model.parameters(), retain_graph=True)
loss2_grads =…

asked by Knotnet
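A sketch of the usual culprit: retain_graph=True keeps the graph alive, and if any tensor carried across iterations still references an old graph, the graph (and the per-epoch time) grows without bound. Rebuild the losses from fresh tensors each step and let the final grad call free the graph:

import torch

model = torch.nn.Linear(4, 1)
x = torch.randn(8, 4)

for step in range(100):
    out = model(x)  # fresh graph every iteration
    loss1, loss2 = out.mean(), out.pow(2).mean()
    g1 = torch.autograd.grad(loss1, model.parameters(), retain_graph=True)
    g2 = torch.autograd.grad(loss2, model.parameters())  # frees the graph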
-1 votes · 1 answer
Why are there two different flags to disable gradient computation in PyTorch?
I am an intermediate learner of PyTorch, and in some recent cases I have seen people use torch.inference_mode() instead of the well-known torch.no_grad() when validating a trained agent in reinforcement learning (RL) experiments. I checked the…

asked by Satya Prakash Dash
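A small sketch of the difference: both context managers disable gradient tracking, but tensors created under inference_mode can never re-enter autograd, which lets PyTorch skip version-counter and view bookkeeping and run slightly faster:

import torch

x = torch.randn(3, requires_grad=True)

with torch.no_grad():
    y = x * 2  # ordinary tensor, just built without a graph

with torch.inference_mode():
    z = x * 2  # "inference tensor": feeding it to autograd later raises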
-1 votes · 1 answer
PyTorch: Getting the output gradient w.r.t. the input on GPU
I can get the output gradient w.r.t. the input by modifying the input as shown below.
But when I change the device from CPU to GPU, it is no longer calculated (I get "None"). How can I get the gradient?
action_batch = torch.tensor(action_batch,…

asked by 김한결
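A sketch of the common cause: calling .to(device) or .cuda() on a tensor that already requires grad returns a non-leaf copy, and .grad is only populated on leaf tensors, so the copy reports None. Move the tensor first, then mark it as requiring grad:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(4, device=device).requires_grad_()  # leaf on the target device

loss = (x ** 2).sum()
loss.backward()
print(x.grad)  # populated on CPU and GPU alike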
-1 votes · 1 answer
Error when running the backward() function in PyTorch
The code:
import numpy as np
predictors = np.array([[73,67,43],[91,88,64],[87,134,58],[102,43,37],[69,96,70]],dtype='float32')
outputs = np.array([[56,70],[81,101],[119,133],[22,37],[103,119]],dtype='float32')
inputs =…

asked by Nirmalya Misra
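A minimal sketch of that setup, assuming a hand-rolled linear regression like the excerpt: the method is spelled backward(), it must be called on a scalar, and the weights (not the inputs) need requires_grad=True:

import numpy as np
import torch

predictors = np.array([[73, 67, 43], [91, 88, 64], [87, 134, 58],
                       [102, 43, 37], [69, 96, 70]], dtype='float32')
outputs = np.array([[56, 70], [81, 101], [119, 133],
                    [22, 37], [103, 119]], dtype='float32')

inputs, targets = torch.from_numpy(predictors), torch.from_numpy(outputs)
w = torch.randn(3, 2, requires_grad=True)
b = torch.randn(2, requires_grad=True)

loss = ((inputs @ w + b - targets) ** 2).mean()  # scalar loss
loss.backward()
print(w.grad.shape)  # torch.Size([3, 2])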
-1 votes · 1 answer
"ValueError: array is not broadcastable to correct shape" when using nested arrays in Autograd
I am using Autograd to compute the gradient of a float-valued function. The function takes an array of arrays as its argument, returns a float, and is quite complicated. A minimal example which produces this error is the function in the…

asked by abcde799
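A sketch of the usual workaround: autograd cannot differentiate through ragged, object-dtype arrays of arrays, but it does accept Python lists and tuples of ordinary float arrays as the differentiated argument:

import autograd.numpy as np
from autograd import grad

def f(params):
    a, b = params  # a list of proper arrays instead of np.array([a, b])
    return np.sum(a ** 2) + np.sum(b ** 3)

g = grad(f)
print(g([np.ones(2), 2.0 * np.ones(3)]))  # per-array gradients, shapes preserved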
-1 votes · 1 answer
Where to start on creating a method that saves desired changes in a Tensor with PyTorch?
I have two tensors from which I am calculating the Spearman's rank correlation, and I would like PyTorch to automatically adjust the values in these tensors in a way that raises my Spearman's rank correlation number as high as…

asked by Fareed Mabrouk
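A conceptual sketch: hard ranks are piecewise constant, so their gradient is zero almost everywhere and autograd cannot push Spearman's correlation uphill directly. A common workaround is to ascend a differentiable surrogate, such as the Pearson correlation (or a soft-rank relaxation):

import torch

x = torch.randn(100, requires_grad=True)  # values to be adjusted
y = torch.randn(100)                      # fixed target
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(200):
    opt.zero_grad()
    xc, yc = x - x.mean(), y - y.mean()
    corr = (xc * yc).sum() / (xc.norm() * yc.norm() + 1e-8)
    (-corr).backward()  # minimize the negative, i.e. ascend the correlation
    opt.step()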
-1 votes · 1 answer
How to convert ndarray to autograd variable in GPU format?
I am trying to do something like this,
data = torch.autograd.Variable(torch.from_numpy(nd_array))
It comes out with the type Variable[torch.FloatTensor], but I need Variable[torch.cuda.FloatTensor]. Also, I want to do this in PyTorch version 0.3.0…

asked by Arjun Sankarlal
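A sketch for the legacy 0.3.x API named in the question: move the tensor with .cuda() when wrapping it, which yields Variable[torch.cuda.FloatTensor]; in PyTorch 0.4+ Variable is merged into Tensor and torch.from_numpy(nd_array).to('cuda') suffices:

import numpy as np
import torch
from torch.autograd import Variable

nd_array = np.ones((2, 3), dtype=np.float32)
data = Variable(torch.from_numpy(nd_array).cuda())  # Variable[torch.cuda.FloatTensor]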