Questions tagged [autograd]

Autograd can automatically differentiate native Python and NumPy code; the tag is also used for the separate autograd engine built into the deep learning framework PyTorch. It can handle a large subset of Python's features, including loops, ifs, recursion, and closures, and it can even take derivatives of derivatives of derivatives. The main intended application of Autograd is gradient-based optimization.

362 questions
0
votes
1 answer

Autograd breaks np.empty_like

I'm trying to take the gradient of a function in which I assign numpy array elements individually (assigning local forces to a global force vector in an FEA), but this appears to break Autograd -- if I use np.zeros for the global array I get…
JoshuaF
  • 1,124
  • 2
  • 9
  • 23
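A minimal sketch of the usual workaround, assuming a toy stand-in for the FEA assembly (assemble_forces and objective are hypothetical names): HIPS Autograd cannot trace in-place assignment into arrays created with np.zeros or np.empty_like, so the global vector is built functionally from a list instead.

```python
import autograd.numpy as np
from autograd import grad

def assemble_forces(u):
    # Instead of f = np.zeros(n); f[i] = ...  (in-place assignment breaks tracing),
    # collect the per-element contributions and turn the list into an array.
    contributions = [u[i] ** 2 for i in range(len(u))]   # stand-in for local forces
    return np.array(contributions)

def objective(u):
    return np.sum(assemble_forces(u))

g = grad(objective)
print(g(np.array([1.0, 2.0, 3.0])))   # [2. 4. 6.]
```

Building the result with autograd.numpy.array (or np.concatenate / np.stack) keeps every element inside the traced graph.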
0
votes
1 answer

Gradient flow stopped on a combined model

I have run into a problem where the gradient cannot backpropagate through a combined network. I have checked lots of answers but cannot find a relevant solution to this problem. I would appreciate any help. I wanted to calculate the gradient…
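A hedged sketch, not the asker's actual network: a common reason gradients stop flowing through a combined model is that the intermediate output is detached or converted to NumPy between the two sub-networks. The two nn.Linear modules below are placeholders.

```python
import torch
import torch.nn as nn

net_a = nn.Linear(4, 8)
net_b = nn.Linear(8, 1)

x = torch.randn(2, 4)
h = net_a(x)
# y = net_b(torch.tensor(h.detach().numpy()))  # breaks the graph: no grads for net_a
y = net_b(h)                                    # keeps the graph intact
y.sum().backward()

print(net_a.weight.grad is not None)            # True when the graph is intact
```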
0
votes
0 answers

Using PyTorch's built-in derivative as part of a custom autograd function

I'm looking to implement a custom autograd function where the backward pass is a mix of a custom gradient and the derivative of a function which torch should be able to find by itself. For a simple example, say I wanted to create a function for y…
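One possible shape for this, sketched with a toy function (x**2 + sin(x) is only an illustrative stand-in): the hand-written part of the gradient is coded directly, while the part torch can find by itself is obtained with torch.autograd.grad inside backward, under torch.enable_grad().

```python
import torch

class MixedFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x ** 2 + torch.sin(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        custom_part = 2 * x                      # hand-written d(x^2)/dx
        with torch.enable_grad():                # let torch differentiate sin(x)
            xr = x.detach().requires_grad_(True)
            torch_part, = torch.autograd.grad(torch.sin(xr).sum(), xr)
        return grad_out * (custom_part + torch_part)

x = torch.randn(3, requires_grad=True)
MixedFn.apply(x).sum().backward()
print(torch.allclose(x.grad, 2 * x + torch.cos(x)))  # True
```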
0
votes
0 answers

Optimization with autograd

I have a continuous-valued dataset {input, target}. The input (x) dimension is 224 and the target (y) dimension is 1. Values of y lie in (0, 1). There are only around 1000 data points. My objective is to maximize the function y = f(x)…
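A hedged sketch of the usual pattern for this kind of problem: first fit a differentiable surrogate f to the {input, target} pairs, then run gradient ascent on the 224-dimensional input to maximize the predicted y. The network shape, learning rate, and step count below are placeholders, not taken from the question.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(224, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())
# ... assume f has been trained on the ~1000 (x, y) pairs ...

x = torch.randn(1, 224, requires_grad=True)      # starting point for the search
opt = torch.optim.Adam([x], lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = -f(x)                                  # maximize y by minimizing -y
    loss.backward()
    opt.step()
print(f(x).item())
```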
0
votes
1 answer

Replicating https://www.d2l.ai/chapter_linear-networks/linear-regression-scratch.html in PyTorch

I am trying to replicate the code in PyTorch. However, I am having some problems with the autograd function and get the following runtime error: RuntimeError: Trying to backward through the graph a second time. The code is the following: for…
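A hedged guess at the usual cause of this error in the linear-regression-from-scratch exercise: part of the graph (often the loss, or data built from tensors that require grad) is created once outside the training loop and then backpropagated through on every iteration. Rebuilding the forward pass inside the loop, as in the sketch below, avoids reusing a freed graph.

```python
import torch

true_w, true_b = torch.tensor([2.0, -3.4]), 4.2
X = torch.normal(0, 1, (1000, 2))
y = X @ true_w + true_b + torch.normal(0, 0.01, (1000,))

w = torch.zeros(2, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

lr = 0.03
for epoch in range(3):
    loss = ((X @ w + b - y) ** 2 / 2).mean()   # recomputed every iteration
    loss.backward()
    with torch.no_grad():
        w -= lr * w.grad
        b -= lr * b.grad
        w.grad.zero_()
        b.grad.zero_()
    print(epoch, loss.item())
```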
0
votes
1 answer

Computing Output Pixel-wise Gradient Norm in PyTorch

I am looking for an efficient way of computing \hat{x} of dimensions (b x c x h x w), defined per sample as: where x is the output of the same dimensions generated by a model with parameters \theta, and i, j index the height and width of the 2D…
0
votes
1 answer

Using autograd on python wrapped C function with custom datatype

Can autograd in principle work on python wrapped C functions? The C function I'd like to differentiate expects arguments with a REAL8 data type, and I can successfully call it in python by giving it either float or np.float64 arguments. Upon closer…
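A minimal sketch, assuming the wrapped C routine behaves like a black-box float-to-float function: Autograd cannot trace through compiled code, so the standard route is to register the wrapper as a primitive and supply its vector-Jacobian product by hand. c_square stands in for the real REAL8-based routine.

```python
import autograd.numpy as np
from autograd import grad
from autograd.extend import primitive, defvjp

@primitive
def c_square(x):
    return float(x) ** 2          # pretend this call crosses into C

defvjp(c_square, lambda ans, x: lambda g: g * 2.0 * x)   # hand-written gradient

f = lambda x: c_square(x) + 3.0 * x
print(grad(f)(2.0))               # 7.0
```

Without a registered primitive, the traced values Autograd passes in are boxed objects that a C wrapper expecting plain floats typically cannot convert.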
0
votes
1 answer

Defining parameters by some transformation OR Retaining sub-graphs, but not the whole graph

I'm coming across an issue I haven't seen come up before. I work in Bayesian Machine Learning and as such make a lot of use of the distributions in PyTorch. One common thing to do is to define some of the parameters of distributions in terms of the…
Rabbitman14
  • 331
  • 1
  • 3
  • 13
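A hedged sketch of one common resolution, using a hypothetical Normal with a positive scale: when distribution parameters are a transformation of the learnable tensors, rebuilding the distribution (and thus the transformation) inside the training loop puts each backward() on a fresh graph, instead of requiring retain_graph=True.

```python
import torch
import torch.nn.functional as F

raw_scale = torch.zeros(1, requires_grad=True)
loc = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([loc, raw_scale], lr=1e-2)

data = torch.randn(100)
for _ in range(100):
    opt.zero_grad()
    dist = torch.distributions.Normal(loc, F.softplus(raw_scale))  # rebuilt each step
    loss = -dist.log_prob(data).mean()
    loss.backward()
    opt.step()
```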
0
votes
1 answer

Nonexistent PyTorch gradients when dotting tensors in loss function

For the purposes of this MWE I'm trying to fit a linear regression using a custom loss function with multiple terms. However, I'm running into strange behavior when trying to weight the different terms in my loss function by dotting a weight vector…
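A hedged sketch of the weighting pattern described, with a made-up two-term loss: gradients survive the dot product as long as the terms are combined with torch.stack rather than rebuilt via torch.tensor([...]), which copies them out of the graph.

```python
import torch

w = torch.randn(2, requires_grad=True)
x, y = torch.randn(50, 2), torch.randn(50)
pred = x @ w

term1 = ((pred - y) ** 2).mean()
term2 = w.abs().sum()
weights = torch.tensor([1.0, 0.1])

# loss = torch.dot(weights, torch.tensor([term1, term2]))  # copies/detaches the terms: no grad
loss = torch.dot(weights, torch.stack([term1, term2]))      # keeps the graph
loss.backward()
print(w.grad)
```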
0
votes
1 answer

How to compute gradient of power function wrt exponent in PyTorch?

I am trying to compute the gradient of out = x.sign()*torch.pow(x.abs(), alpha) with respect to alpha. I tried the following so far: class Power(nn.Module): def __init__(self, alpha=2.): super(Power, self).__init__() self.alpha =…
Ilia
  • 319
  • 2
  • 10
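A minimal sketch, not the asker's exact module: for d(out)/d(alpha) to exist, the exponent has to be a tensor that requires grad (for instance an nn.Parameter), so torch.pow is applied to that tensor rather than to a plain Python float.

```python
import torch
import torch.nn as nn

class Power(nn.Module):
    def __init__(self, alpha=2.0):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha))

    def forward(self, x):
        return x.sign() * torch.pow(x.abs(), self.alpha)

m = Power()
x = torch.randn(5)
m(x).sum().backward()
print(m.alpha.grad)   # d out / d alpha = sign(x) * |x|^alpha * log|x|, summed over x
```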
0
votes
1 answer

Strange behavior of Inception_v3

I am trying to create a generative network based on the pre-trained Inception_v3. 1) I fix all the weights in the model, 2) create a Variable whose size is (2, 3, 299, 299), 3) create targets of size (2, 1000) that I want my final layer…
MegaNightdude
  • 161
  • 2
  • 8
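A hedged sketch of the setup described (freeze the Inception_v3 weights, then optimize the input tensor toward chosen targets); the loss, learning rate, and target construction are placeholders. Putting the model in eval() mode also sidesteps the auxiliary-logits output that inception_v3 returns in training mode.

```python
import torch
import torch.nn.functional as F
from torchvision import models

model = models.inception_v3(weights="DEFAULT")   # downloads pretrained weights
model.eval()
for p in model.parameters():
    p.requires_grad_(False)                      # 1) fix all the weights

x = torch.randn(2, 3, 299, 299, requires_grad=True)   # 2) the optimizable input
target = torch.softmax(torch.randn(2, 1000), dim=1)   # 3) desired final-layer output

opt = torch.optim.Adam([x], lr=0.05)
for _ in range(50):
    opt.zero_grad()
    loss = F.mse_loss(torch.softmax(model(x), dim=1), target)
    loss.backward()
    opt.step()
```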
0
votes
1 answer

How can I calculate the network gradients w.r.t weights for all inputs in PyTorch?

I'm trying to figure out how I can calculate the gradient of the network for each input, and I'm a bit lost. Essentially, what I want is to calculate d self.output/d weight1 and d self.output/d weight2 for all values of input x. So, I would…
AlphaBetaGamma96
  • 567
  • 3
  • 6
  • 21
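A minimal sketch of the straightforward (if slow) way to get per-input gradients with respect to each weight tensor: loop over the inputs and call torch.autograd.grad once per sample. The two-layer network is a placeholder for the asker's self.output / weight1 / weight2 setup.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(3, 4), nn.Tanh(), nn.Linear(4, 1))
weight1, weight2 = net[0].weight, net[2].weight

xs = torch.randn(10, 3)
per_sample_grads = []
for x in xs:
    out = net(x.unsqueeze(0)).squeeze()
    g1, g2 = torch.autograd.grad(out, (weight1, weight2))
    per_sample_grads.append((g1, g2))

print(per_sample_grads[0][0].shape)   # torch.Size([4, 3]): d output / d weight1 for input 0
```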
0
votes
1 answer

Problems with autograd.hessian_vector_product and scipy.optimize.NonlinearConstraint

I'm trying to run a minimization problem using scipy.optimize, including a NonlinearConstraint. I really don't want to code derivatives myself, so I'm using autograd to do it. But even though I follow the exact same procedure for the arguments to…
Heshy
  • 382
  • 1
  • 10
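A hedged sketch of wiring autograd derivatives into scipy.optimize.minimize with a NonlinearConstraint, on a made-up problem. The asymmetry that often causes trouble: minimize's hessp takes (x, p) and returns a Hessian-vector product, which matches autograd.hessian_vector_product, while NonlinearConstraint's hess takes (x, v) with v the constraint multipliers and must return a matrix.

```python
import autograd.numpy as np
from autograd import grad, jacobian, hessian, hessian_vector_product
from scipy.optimize import minimize, NonlinearConstraint

def objective(x):
    return np.sum((x - 1.0) ** 2)

def con(x):                       # scalar constraint: sum(x^2) <= 1
    return np.sum(x ** 2)

constraint = NonlinearConstraint(
    con, -np.inf, 1.0,
    jac=jacobian(con),
    hess=lambda x, v: v[0] * hessian(con)(x),   # Hessian of v . con(x)
)

res = minimize(
    objective, np.zeros(3), method="trust-constr",
    jac=grad(objective),
    hessp=hessian_vector_product(objective),    # called as hessp(x, p) -> H @ p
    constraints=[constraint],
)
print(res.x)
```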
0
votes
1 answer

Is it possible to use Autograd to compute the derivative of a neural network output with respect to one of its inputs?

I have a neural network model that outputs a vector Y of size approximately 4000 for about 9 inputs X. I need to compute the partial derivative of the output Y with respect to one or two of the inputs X_1 or X_2. I already have these derivatives,…
nanda
  • 63
  • 5
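A hedged sketch assuming a PyTorch model (the question does not say which framework): the columns of the Jacobian dY/dX give the derivative of every output with respect to a chosen input, and torch.autograd.functional.jacobian computes the whole (4000 x 9) matrix in one call. The network below is a placeholder.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(9, 64), nn.Tanh(), nn.Linear(64, 4000))

x = torch.randn(9, requires_grad=True)
jac = torch.autograd.functional.jacobian(net, x)   # shape (4000, 9)
dY_dX1 = jac[:, 0]                                  # derivative of every output w.r.t. input X_1
print(dY_dX1.shape)
```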
0
votes
1 answer

Unable to optimize a function using PyTorch

I am trying to write an estimator for a Structural Equation Model. So basically I start with random parameters for the model: B, gamma, phi_diag, psi. Using these I compute the implied covariance matrix sigma, and my optimization function f_ml…
Ankur Ankan
  • 2,953
  • 2
  • 23
  • 38
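A hedged sketch of the general shape of such an estimator, with made-up data and a simplified fit function rather than the asker's B / gamma / phi_diag / psi model: the parameters must be leaf tensors with requires_grad=True, the implied covariance must be built entirely from torch operations, and those same tensors must be handed to the optimizer.

```python
import torch

S = torch.eye(3) + 0.1 * torch.ones(3, 3)            # sample covariance (placeholder data)
B = torch.zeros(3, 3, requires_grad=True)             # free structural coefficients
log_psi = torch.zeros(3, requires_grad=True)          # log of the diagonal error variances

def implied_sigma(B, log_psi):
    I = torch.eye(3)
    inv = torch.linalg.inv(I - B)
    return inv @ torch.diag(torch.exp(log_psi)) @ inv.T   # torch ops only

def f_ml(sigma):
    # simplified ML-style discrepancy between implied and sample covariance
    return torch.logdet(sigma) + torch.trace(S @ torch.linalg.inv(sigma))

opt = torch.optim.Adam([B, log_psi], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = f_ml(implied_sigma(B, log_psi))
    loss.backward()
    opt.step()
print(loss.item())
```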