Questions tagged [autograd]

Autograd can automatically differentiate native Python and NumPy code; the name also refers to the automatic differentiation engine of the deep learning framework PyTorch (torch.autograd). It can handle a large subset of Python's features, including loops, conditionals, recursion, and closures, and it can take derivatives of derivatives of derivatives. The main intended application of Autograd is gradient-based optimization.
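For reference, a minimal sketch of the HIPS Autograd API described above: grad wraps a plain NumPy function, and the resulting derivative can itself be differentiated again.

```python
# Differentiate a native NumPy function, then differentiate the derivative.
import autograd.numpy as np          # thin autograd wrapper around NumPy
from autograd import grad

def tanh(x):
    return (1.0 - np.exp(-2 * x)) / (1.0 + np.exp(-2 * x))

d_tanh = grad(tanh)        # first derivative
dd_tanh = grad(d_tanh)     # second derivative (a derivative of a derivative)

print(tanh(1.0), d_tanh(1.0), dd_tanh(1.0))
```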

362 questions
1
vote
0 answers

Automatic Differentiation: PyTorch vs. Tensorflow

I am creating a simple function that simulates N paths of Geometric Brownian Motion (GBM) with M discretization steps (M+1 points if you include the starting point). My function returns the values of the GBM at the last step (but I simulate the entire…
Landscape
  • 249
  • 1
  • 2
  • 13
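A minimal sketch of a differentiable GBM simulator in PyTorch, roughly matching the setup the question describes; the parameter names (s0, mu, sigma) and the log-Euler scheme are illustrative assumptions.

```python
import math
import torch

def simulate_gbm(s0, mu, sigma, n_paths=1000, n_steps=100, T=1.0):
    dt = T / n_steps
    z = torch.randn(n_paths, n_steps)
    # Exact log-Euler increments keep the whole simulation differentiable
    log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * math.sqrt(dt) * z
    return s0 * torch.exp(log_increments.sum(dim=1))   # terminal values only

sigma = torch.tensor(0.2, requires_grad=True)
terminal = simulate_gbm(torch.tensor(100.0), torch.tensor(0.05), sigma)
terminal.mean().backward()
print(sigma.grad)   # pathwise sensitivity of the mean terminal value
```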
1
vote
1 answer

Add variables/weights dynamically in tensorflow custom keras layers

Is it possible to add new variables/weights in custom layers during training? I want this feature (or something similar) to implement an expandable embedding layer. I have tried tf.py_function, but it failed to track the newly added weights and raised an…
user416983
  • 974
  • 3
  • 18
  • 28
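For context, the standard mechanism for creating weights in a custom Keras layer is self.add_weight inside build(); truly growing the variable set mid-training is what makes the question hard, since tf.function traces a fixed set of variables. A minimal sketch, with the over-allocated vocabulary size as an illustrative workaround:

```python
import tensorflow as tf

class ExpandableEmbedding(tf.keras.layers.Layer):
    def __init__(self, dim, **kwargs):
        super().__init__(**kwargs)
        self.dim = dim

    def build(self, input_shape):
        # Over-allocate rows up front as a workaround for dynamic growth;
        # the initial size of 10_000 is an illustrative assumption.
        self.table = self.add_weight(
            shape=(10_000, self.dim), initializer="uniform",
            trainable=True, name="table")

    def call(self, ids):
        return tf.gather(self.table, ids)   # look up embedding rows
```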
1
vote
0 answers

How to calculate the Jacobian matrix of neural network faster in pytorch?

I want to calculate the Jacobian matrix and Hessian matrix of the neural network in PyTorch. I know you can use the vmap and jacrev functions from the torch.func module, but they are much slower than the oracle function: from torch.func import vmap,…
Frank Tian
  • 11
  • 1
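A minimal sketch of the torch.func approach the question mentions, assuming a small MLP: vmap vectorizes jacrev over the batch to produce per-sample Jacobians in one call.

```python
import torch
from torch.func import vmap, jacrev

net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 2))

x = torch.randn(8, 3)                   # batch of 8 inputs
per_sample_jac = vmap(jacrev(net))(x)   # Jacobian per sample: (8, 2, 3)
print(per_sample_jac.shape)
```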
1
vote
1 answer

Python Autograd Confusion

I am currently trying to train a simple feed-forward neural net to solve the very simple differential equation dy/dx = 2 on [-1, 1]. If we consider the neural net to be the function NN(x), I have set my loss to be MSE(NN'(x) - 2, 0), but I am having…
mimo
  • 11
  • 2
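A minimal sketch of the usual way to build such a loss, assuming a small MLP called net: torch.autograd.grad with create_graph=True yields NN'(x) as a tensor that the loss can backpropagate through.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

x = torch.linspace(-1, 1, 100).unsqueeze(1).requires_grad_(True)
y = net(x)
# Derivative of each output w.r.t. its own input; create_graph=True
# keeps the derivative differentiable so the loss can be trained on.
dydx, = torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y),
                            create_graph=True)
loss = ((dydx - 2.0) ** 2).mean()
loss.backward()   # gradients now flow into net's parameters
```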
1
vote
0 answers

How can I compute the gradient of multiple outputs with respect to a batch of inputs in a single backward pass in PyTorch?

I am performing multi-label image classification in PyTorch, and would like to compute the gradients of all outputs at ground truth labels for each input with respect to the input. I would preferably like to do this in a single backward pass for a…
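A minimal sketch of one common trick, assuming each sample's output depends only on that sample: summing the logits at the ground-truth labels lets a single backward pass produce per-input gradients. All names and shapes are illustrative.

```python
import torch

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 10))
images = torch.randn(4, 3, 8, 8, requires_grad=True)
labels = torch.tensor([1, 3, 0, 7])

logits = model(images)                                  # (4, 10)
selected = logits.gather(1, labels.unsqueeze(1)).sum()  # one scalar
selected.backward()                                     # single pass
print(images.grad.shape)    # per-image input gradients: (4, 3, 8, 8)
```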
1
vote
1 answer

Directly access derivative of primitive functions in PyTorch

For backpropagation in PyTorch, many gradients of simple functions are of course already implemented. But what if I want a function that evaluates the gradient of an existing primitive function directly, e.g. the derivative of…
Fabricio
  • 148
  • 1
  • 11
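A minimal sketch using torch.func.grad, which turns a scalar-to-scalar primitive such as torch.sigmoid into a callable for its derivative:

```python
import torch
from torch.func import grad

dsigmoid = grad(torch.sigmoid)   # x -> sigmoid'(x), for scalar inputs
x = torch.tensor(0.5)
print(dsigmoid(x))               # equals sigmoid(x) * (1 - sigmoid(x))
```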
1
vote
0 answers

Can we get the Jacobian of functions with vector inputs in PyTorch?

My goal is to understand how to use torch.autograd.grad. The documentation gives its signature as torch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False,…
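A minimal sketch of the related convenience API, torch.autograd.functional.jacobian, which assembles the full Jacobian of a vector-to-vector function; the toy function f is an illustrative assumption.

```python
import torch

def f(x):
    return torch.stack([x[0] * x[1], x[1] ** 2, x[0].sin()])

x = torch.tensor([2.0, 3.0])
J = torch.autograd.functional.jacobian(f, x)   # shape: (3, 2)
print(J)
```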
1
vote
1 answer

How to compute the gradient of the output with respect to each input in pytorch

I have a tensor of shape (number_of_rays, number_of_points_per_ray, 3); let’s call it input. input is passed through a model and some processing (all of this is differentiable); let’s call this process inference. Finally, we get output =…
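A minimal sketch, assuming output holds one scalar per ray: passing grad_outputs=torch.ones_like(output) gives every input element's gradient in a single pass. The toy stand-in for inference is an assumption.

```python
import torch

inp = torch.randn(5, 16, 3, requires_grad=True)   # (rays, points, 3)
output = (inp ** 2).sum(dim=(1, 2))               # stand-in for inference: (5,)

grads, = torch.autograd.grad(output, inp,
                             grad_outputs=torch.ones_like(output))
print(grads.shape)   # gradient w.r.t. each input element: (5, 16, 3)
```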
1
vote
1 answer

How to "manually" apply your gradients in Pytorch?

What would be the equivalent in PyTorch of the following in TensorFlow, where loss is the calculated loss in the iteration of the network and net is the neural network? with tf.GradientTape() as tape: grads = tape.gradient(loss,…
KO4all
  • 39
  • 4
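A minimal sketch of a PyTorch counterpart: compute the gradients with torch.autograd.grad and apply them by hand under torch.no_grad().

```python
import torch

net = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = ((net(x) - y) ** 2).mean()

grads = torch.autograd.grad(loss, net.parameters())
with torch.no_grad():
    for p, g in zip(net.parameters(), grads):
        p -= 0.01 * g   # plain SGD step, applied manually
```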
1
vote
2 answers

Inplace operation error in control problem

I'm new to PyTorch and I'm having a problem with some code to train a neural network to solve a control problem. I use the following code to solve a toy version of my problem: # SOME IMPORTS import torch import torch.autograd as autograd from…
Genoveffo
  • 15
  • 3
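A minimal sketch of the usual culprit behind this error: an in-place update of a tensor that autograd saved for the backward pass (torch.exp saves its output), fixed by switching to the out-of-place operation.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = torch.exp(x)   # exp's backward needs y itself

# y += 1           # in-place: raises "modified by an inplace operation"
y = y + 1          # out-of-place: creates a new tensor, graph stays valid

y.sum().backward()
print(x.grad)
```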
1
vote
0 answers

A faster Hessian vector product in PyTorch

I need to take a Hessian-vector product of a loss w.r.t. model parameters a large number of times. It seems that there is no efficient way to do this and a for loop is always required, resulting in a large number of independent autograd.grad calls.…
Thomas Wagenaar
  • 6,489
  • 5
  • 30
  • 73
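A minimal sketch of the standard double-backward trick: one grad call with create_graph=True, then the gradient of the dot product of g and v gives Hv without ever materializing the Hessian.

```python
import torch

net = torch.nn.Linear(4, 1)
params = list(net.parameters())
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = ((net(x) - y) ** 2).mean()

g = torch.autograd.grad(loss, params, create_graph=True)
v = [torch.randn_like(p) for p in params]           # the vector in Hv
dot = sum((gi * vi).sum() for gi, vi in zip(g, v))  # <g, v>
hvp = torch.autograd.grad(dot, params)              # H @ v, per parameter
print([h.shape for h in hvp])
```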
1
vote
1 answer

Can't fix torch autograd runtime error: UNet inplace operation

I can't fix the runtime error "one of the variables needed for gradient computation has been modified by an inplace operation". I know that if I comment out loss.backward() the code will run, but I don't get in which order I should call the…
Alex
  • 340
  • 2
  • 13
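A minimal sketch of how such errors are typically located: anomaly mode makes backward() point at the forward operation that produced the modified tensor, and inplace=True activations are a frequent culprit in UNet-style models.

```python
import torch

torch.autograd.set_detect_anomaly(True)   # debugging only: slows training

layer = torch.nn.ReLU(inplace=False)      # inplace=True often causes the error
x = torch.randn(2, 3, requires_grad=True)
loss = layer(x).sum()
loss.backward()   # with anomaly mode on, a failure names the offending op
```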
1
vote
1 answer

How to implement a custom forward/backward function for torch.autograd.Function?

I would like to use PyTorch to optimize an objective function which makes use of an operation that cannot be tracked by torch.autograd. I wrapped such an operation with a custom forward() of the torch.autograd.Function class (as suggested here and…
Domenico
  • 126
  • 13
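A minimal sketch of the documented torch.autograd.Function pattern: save what backward needs via ctx, return the hand-written gradient, and invoke through .apply. The clamp stands in for the untracked operation.

```python
import torch

class MyClamp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x.clamp(min=0.0)           # pretend autograd can't track this

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        return grad_output * (x > 0).float()   # hand-written derivative

x = torch.randn(5, requires_grad=True)
MyClamp.apply(x).sum().backward()
print(x.grad)
```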
1
vote
0 answers

Loss depends on the gradients of the outputs using pytorch

I would like to calculate a loss of the following form, where u_bc and \hat{u}_bc are the predicted and exact values at x_1, and u''_r and \hat{u}''_r are the predicted and exact second derivatives of the output at x_2. x_1 and x_2 are different…
user123
  • 231
  • 2
  • 12
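A minimal sketch of a second-derivative loss term in this PINN style, with an assumed small MLP and zero targets standing in for the exact values: two grad calls with create_graph=True keep the loss differentiable w.r.t. the parameters.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
x2 = torch.linspace(0, 1, 50).unsqueeze(1).requires_grad_(True)

u = net(x2)
du, = torch.autograd.grad(u, x2, torch.ones_like(u), create_graph=True)
d2u, = torch.autograd.grad(du, x2, torch.ones_like(du), create_graph=True)

target = torch.zeros_like(d2u)     # stand-in for the exact second derivatives
loss = ((d2u - target) ** 2).mean()
loss.backward()                    # flows into net's parameters
```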
1
vote
1 answer

How to see which indices of an input affected an index of the output

I want to test my neural network. For example, given an input tensor input, an nn.Module with some submodules called module, and an output tensor output, I want to find which indices of input affected the index (1, 2) of output. More specifically, given: Two…
Inyoung Kim 김인영
  • 1,434
  • 1
  • 17
  • 38
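A minimal sketch of one way to probe this: backpropagate from the single output element and inspect where the input gradient is nonzero. Note the caveat that a nonzero gradient implies influence, but a zero gradient does not rule it out.

```python
import torch

inp = torch.randn(4, 8, requires_grad=True)
module = torch.nn.Linear(8, 8)
output = module(inp)

output[1, 2].backward()     # backprop from one output element
print(inp.grad.nonzero())   # indices of inp that affected output[1, 2]
```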