
I want to calculate the second derivative of an anonymous function in Matlab. I'm already aware of some formulas for this (numerical differentiation), but they don't seem to work.

I'm able to calculate the first derivative with:

f = @(x) (x^3);
h = 1e-10;

df = @(x) (f(x+h) - f(x))/h;

But when I try to calculate the second derivative with the following, I don't get the expected result:

f = @(x) (x^3);
h = 1e-10;

d2f = @(x) (f(x+h) - 2*f(x) + f(x-h))/(h^2);

For d2f I'm supposed to get a function similar to d2f = 6x, but when I plot d2f it looks nothing like that (plot of d2f omitted).

What am I doing wrong?

Ulises
  • It would help if you provided information about what's not working. What are you trying, what result do you expect and what do you get instead? – Will Apr 10 '19 at 15:06
  • See also: https://stackoverflow.com/questions/22160712/derivative-of-anonymous-function/22160827 – horchler Apr 10 '19 at 15:13
  • The problem here is with the size of `h` and [catastrophic cancellation](https://en.wikipedia.org/wiki/Loss_of_significance) due to the numerical precision of floating point. Try `h=1e-7`, or larger. However, depending on where you evaluate the functions, you may still see lots of noise. – horchler Apr 10 '19 at 15:26
  • You can also increase the floating point precision of `h` by using vpa: `h = vpa(1e-10)`. You now have a 32 digit precision, so `h` should be able to be as small as `1e-16`. – obchardon Apr 10 '19 at 15:30
  • @horchler It worked! Thanks!! – Ulises Apr 10 '19 at 15:31
  • Have a look at https://en.wikipedia.org/wiki/Numerical_differentiation and check out the book _Numerical Recipes_. Your choice of _h_ is too small, leading to roundoff errors and catastrophic cancellation. – kvantour Apr 10 '19 at 15:44
  • Possible duplicate of [Trying to implement the difference formula in MATLAB](https://stackoverflow.com/questions/54049833/trying-to-implement-the-difference-formula-in-matlab) – Cris Luengo Apr 10 '19 at 16:42
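To illustrate the fix suggested in the comments, a minimal sketch (not from the original question; the choice of central differences for the first derivative and the plotting range are my own):

```matlab
f = @(x) x.^3;    % element-wise power so the handle works on vectors

% h = 1e-10 squanders precision: h^2 = 1e-20 sits far below the
% relative rounding error of f(x) in double precision, so the
% numerator of the second difference is pure noise.
h = 1e-4;         % larger step, as suggested in the comments

df  = @(x) (f(x+h) - f(x-h)) ./ (2*h);        % central first difference
d2f = @(x) (f(x+h) - 2*f(x) + f(x-h)) ./ h^2; % central second difference

x = linspace(-2, 2, 100);
plot(x, d2f(x), x, 6*x, '--');   % the two curves should nearly overlap
```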

2 Answers


The divided difference formula has a truncation error of O(h^2). The floating point evaluation of the function values each carries a relative error of about the machine precision mu, and those errors are divided by h^2, giving a roundoff error of about mu/h^2. The total error is smallest where the two contributions roughly balance, that is, where h^4 = mu, or h = 1e-4.

This of course is not valid if the coefficient of the truncation error term, which is the 4th derivative of f, is zero, as happens for f(x) = x^3. Then the only error contribution is the floating point error, which is smallest for larger h; even h = 1 gives minimal error.
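This is easy to check numerically. A quick sketch (my own illustration, not part of the original answer; exact digits depend on the platform's floating point):

```matlab
f = @(x) x.^3;
x = 2;     % true second derivative is 6*x = 12

for h = [1e-10 1e-4 1]
    d2 = (f(x+h) - 2*f(x) + f(x-h)) / h^2;
    fprintf('h = %g  ->  d2f = %.6f\n', h, d2);
end
% h = 1e-10 is dominated by cancellation noise, while h = 1e-4 and
% even h = 1 recover 12 almost exactly, since the truncation term
% (proportional to the 4th derivative of x^3) vanishes.
```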

For a less trivial function like f(x) = sin(x), the error for different h behaves as in the following plot (the variable labeled x is the step size h):

(log-log plot of the total error against the step size h, omitted)
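A script along these lines reproduces such a plot (my reconstruction, not the answerer's original code; the evaluation point x0 = 1 is an arbitrary choice):

```matlab
f  = @(x) sin(x);
x0 = 1;                         % evaluation point; sin''(1) = -sin(1)
h  = logspace(-10, 0, 200);     % step sizes from 1e-10 to 1

d2f = (f(x0+h) - 2*f(x0) + f(x0-h)) ./ h.^2;
err = abs(d2f + sin(x0));       % distance from the true value -sin(1)

loglog(h, err);
xlabel('step size h'); ylabel('absolute error');
% The error falls like h^2 (truncation) for large h and grows like
% eps/h^2 (roundoff) for small h, with a minimum near h = 1e-4.
```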

Lutz Lehmann

I'm not sure what you're doing incorrectly, but the code below works:

f=@(x) x.^3;

x = (0:1E-12:1E-6)' ;
d2y = secondDerivative(f,x(1),x(end),x(2)-x(1))';

fit(x,d2y,'poly1')

ans = 

     Linear model Poly1:
     ans(x) = p1*x + p2
     Coefficients (with 95% confidence bounds):
       p1 =           6  (6, 6)
       p2 =   1.352e-15  (-4.734e-13, 4.761e-13)

Function definition

function d2y = secondDerivative(f, x1, x2, dx)
% Second derivative of f on the grid x1:dx:x2 via finite differences.

y = f(x1:dx:x2);

d2y = nan(size(y));
% interior points: standard 3-point central difference
d2y(2:end-1) = y(1:end-2) - 2*y(2:end-1) + y(3:end);

if length(d2y) == 3
    % too few points for one-sided 4-point formulas;
    % reuse the central difference at both endpoints
    d2y(1)   = y(1) - 2*y(2) + y(3);
    d2y(end) = y(end-2) - 2*y(end-1) + y(end);
elseif length(d2y) >= 4
    % one-sided 4-point formulas at the boundaries
    d2y(1)   = 2*y(1) - 5*y(2) + 4*y(3) - y(4);
    d2y(end) = -y(end-3) + 4*y(end-2) - 5*y(end-1) + 2*y(end);
end

d2y = d2y / dx^2;
end
user1543042