Questions tagged [autodiff]

Automatic Differentiation (AD) is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as a computer program.
AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (such as addition) and elementary functions (such as exp()).
By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurate to working precision.

Conceptually, AD is different from symbolic differentiation and approximations by divided differences.
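
As an illustrative sketch of the idea (toy Python, not taken from any particular AD tool), forward-mode AD can be implemented by overloading the elementary operations on "dual numbers" that carry a value together with its derivative:

```python
import math

class Dual:
    """A value and its derivative; each overloaded operation applies the chain rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

def exp(x):
    # d/dx exp(u) = exp(u) * u'
    return Dual(math.exp(x.val), math.exp(x.val) * x.dot)

# Differentiate f(x) = x * exp(x) + x at x = 2 by seeding dx/dx = 1.
x = Dual(2.0, 1.0)
y = x * exp(x) + x
print(y.val, y.dot)   # f(2) and f'(2) = (1 + 2) * exp(2) + 1, accurate to working precision
```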

AD is used in the following areas:

  • Numerical Methods
  • Sensitivity Analysis
  • Design Optimization
  • Data Assimilation & Inverse Problems

Home page: http://www.autodiff.org/

94 questions
1 vote, 1 answer

Wrong template arguments for Eigen::Spline with Eigen::AutoDiff

The solution from the EDIT is now posted as an answer. Old question: I want to include the spline interpolation of Eigen::Spline in a larger formula and want to determine the derivative of this formula with the help of Eigen::AutoDiff. I tried the…
asked by Tobias

0 votes, 0 answers

boost autodiff: derivative functions on two or more variables?

I'm trying to write the derivative of a two-argument function double response(double x, double y) using the boost autodiff API. This function returns the value of a matrix cell indexed by its arguments, therefore it cannot be made…
asked by xperroni

0 votes, 0 answers

Pytorch Calculate Gradient After Slicing

Given a neural network, I want to calculate the gradient of the output with respect to one part of the input using Pytorch's torch.autograd.grad. However, I get a runtime error when I try to call the function saying that the differentiated tensors…
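
A minimal sketch of the usual cause and a common workaround (the tensors and network below are hypothetical, not taken from the question): torch.autograd.grad can only differentiate with respect to tensors that actually appear in the graph, and a slice created after the forward pass is a new tensor that does not.

```python
import torch

x = torch.randn(4, 3, requires_grad=True)
net = torch.nn.Linear(3, 1)
y = net(x).sum()

# Fails: x[:, :2] is a fresh tensor that was never used to compute y.
# torch.autograd.grad(y, x[:, :2])   # RuntimeError

# Workaround: differentiate with respect to the full input, then slice the gradient.
(grad_x,) = torch.autograd.grad(y, x)
print(grad_x[:, :2].shape)   # torch.Size([4, 2])
```
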
0 votes, 1 answer

Julia ForwardDiff no method matching error

I'm trying to implement the automatic differentiation method using the ForwardDiff package. using LinearAlgebra import ForwardDiff function OLS(X,Y,beta) f = (Y - X*beta)'*(Y - X*beta) end n = 100 beta = [1.0, 2.2] X = [ones(n) rand(n)] Y =…
asked by zlqs1985

0 votes, 0 answers

Computing gradient of loss wrt only a part of input without tracking intermediate layers using pytorch autograd

I am trying to compute the gradient of loss with respect to only a part of input without tracking intermediate layers using pytorch autograd. The following explains the details. My input X is: X_{t}=[a1t, a2t, a3t] My neural network (NN_t)…
asked by baagee

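A minimal sketch of one common pattern for this (toy shapes and network, assumed rather than taken from the question): mark only the slice of interest as requiring gradients and concatenate it with the rest of the input before the forward pass.

```python
import torch

a1 = torch.randn(4, 1, requires_grad=True)   # the part of the input we differentiate w.r.t.
rest = torch.randn(4, 2)                      # remaining inputs, not tracked
net = torch.nn.Sequential(torch.nn.Linear(3, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1))

loss = net(torch.cat([a1, rest], dim=1)).pow(2).mean()
(grad_a1,) = torch.autograd.grad(loss, a1)    # gradient only w.r.t. the tracked slice
print(grad_a1.shape)                          # torch.Size([4, 1])
```
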
0 votes, 1 answer

How to implement graph data structure consisting of ndarray arrays?

I wanted to implement some computational graph data structure in Rust (that can hopefully support autodiff), and I decided to use the ndarray crate, since that is most similar to NumPy. Making a graph data structure isn't really all that difficult,…
asked by nekechs

0 votes, 0 answers

Differentiation with GradientTape of Tensorflow in 3 dimensions

I am studying PDEs using PINNs and constructed a code with Tensorflow. More specifically, I deal with the heat equation in a 3-dimensional space (2 space dimensions + 1 time dimension). However, the results are not very good. I guess there is a problem with the twice…
asked by himath

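For second derivatives with GradientTape, the usual pattern is two nested tapes; here is a minimal sketch, with tf.sin standing in for the PINN and hypothetical collocation points whose columns are (t, x, y):

```python
import tensorflow as tf

pts = tf.Variable(tf.random.uniform((5, 3)))                # rows: points, columns: (t, x, y)

with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        u = tf.reduce_sum(tf.sin(pts), axis=1, keepdims=True)  # stand-in for u = model(pts)
    grad_u = inner.gradient(u, pts)                          # du/dt, du/dx, du/dy per point
    u_x = grad_u[:, 1:2]                                     # keep only du/dx
u_xx = outer.gradient(u_x, pts)[:, 1:2]                      # d2u/dx2 per point (model acts row-wise)
print(u_xx.shape)                                            # (5, 1)
```
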
0 votes, 0 answers

Meta-Gradients / Multi-Batch Backpropagation in tensorflow

I am trying to implement a meta-gradient based pruning-at-initialization method by Alizadeh et al. (2022) in tensorflow. The method works roughly like this: Take some batches from the dataset. Mask all weights of the network with ones (e. g.…
0 votes, 0 answers

How to break down a mesh distance minimization problem?

I'm having trouble solving a problem using "Ceres" and I could use some help! To simplify the problem: Imagine I have a mesh "A" that I want to scale and rotate (with scale + rotation represented as variables!) to be as close to a mesh "B" as…
asked by IApp

0 votes, 1 answer

Tensorflow - constructing a tensor from particular values extracted from two different tensors

I'm trying to construct a single tensor using values from two different tensors and an array of two dimensional indices, in a manner compatible with TensorFlow autodiff. In a first step I want to extract the elements of a tensor D of shape (n,n)…
asked by user3131493

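For differentiable extraction with two-dimensional indices, tf.gather_nd is the usual tool; a minimal sketch with hypothetical shapes and indices (not the asker's data):

```python
import tensorflow as tf

n = 4
D = tf.Variable(tf.random.uniform((n, n)))
idx = tf.constant([[0, 1], [2, 3], [3, 0]])     # each row is a (row, col) index into D

with tf.GradientTape() as tape:
    picked = tf.gather_nd(D, idx)               # differentiable extraction of D[0,1], D[2,3], D[3,0]
    loss = tf.reduce_sum(picked ** 2)

# The gradient is scattered back: nonzero only at the gathered positions.
print(tape.gradient(loss, D))
```
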
0 votes, 0 answers

Autodiff Error, error: no matching function for call to '__apply_tuple_impl' _VSTD::__apply_tuple_impl

I am trying to implement some auto-differentiation Jacobians for verification of analytical Jacobians. These are the functions in question being tested. When I compile them with g++ I get the following error: In file included from…
asked by sokato

0 votes, 1 answer

pytorch sets grad attribute to none if I use simple minus instead of -=

This is a simple code to show the problem import torch X = torch.arange(-3, 3, step=0.1) Y = X * 3 Y += 0.1 * torch.randn(Y.shape) def my_train_model(iter): w = torch.tensor(-15.0, requires_grad=True) lr = 0.1 for epoch in range(iter): …
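
A minimal sketch of what usually goes on here (toy data, assumed rather than copied from the question): inside torch.no_grad(), an in-place `w -= ...` keeps `w` the same leaf tensor, while `w = w - ...` rebinds the name to a new non-leaf tensor whose .grad is None on the next iteration.

```python
import torch

w = torch.tensor(-15.0, requires_grad=True)
loss = (w * 2.0) ** 2
loss.backward()

with torch.no_grad():
    w -= 0.1 * w.grad        # in-place: w stays the same leaf, w.grad remains usable
    # w = w - 0.1 * w.grad   # out-of-place: w becomes a new non-leaf tensor, .grad is None later

w.grad.zero_()
print(w, w.grad)
```
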
0 votes, 0 answers

Partial derivatives of neural network output with respect to inputs

I have trained a deep neural network for regression, with 2 input neurons, 1 output neuron and some hidden layers, as in the following (Tensorflow 2): import numpy as np from tensorflow.keras.layers import Dense, Input from tensorflow.keras.models…
asked by Gio

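A minimal sketch of the standard approach with tf.GradientTape (the small two-input, one-output model below is only a stand-in for the trained network):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Stand-in for the trained regression model: 2 inputs -> 1 output.
inp = Input(shape=(2,))
hid = Dense(16, activation="tanh")(inp)
out = Dense(1)(hid)
model = Model(inp, out)

x = tf.constant(np.random.rand(8, 2), dtype=tf.float32)
with tf.GradientTape() as tape:
    tape.watch(x)                # x is a constant, so it must be watched explicitly
    y = model(x)
dy_dx = tape.gradient(y, x)      # shape (8, 2): partials of each output w.r.t. its two inputs
print(dy_dx.shape)
```
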
0 votes, 1 answer

JuMP: Issues of expecting float64, typeError: in typeassert, expected Float64, got ForwardDiff.Dual with autodiff = true and problems with exp()

So I tried to make a minimum example to ask questions based on a more complicated piece of code I have written: A HUGE common error I'm getting is expecting float64 and instead got ForwardDiff.Dual - can someone give me a tip how in general I…
asked by Eigenvalue

0 votes, 1 answer

Custom gradient with complex exponential in tensorflow

As an exercise I am trying to build a custom operator in Tensorflow, and checking the gradient against Tensorflow's autodiff of the same forward operation composed of Tensorflow API operations. However, the gradient of my custom operator is…
asked by rassi