
For some function f(x), I want to find f'(x) and f''(x) using the following approximation:

[f(x+h) - f(x)] / h

Which values of h should I choose in each scenario? I know that for f'(x) it should be h = sqrt(epsilon), where epsilon is the machine epsilon, but do I have to handle the h value differently for f''(x)? My guess is that the truncation and rounding errors somewhat cancel each other out, resulting in this value.

How should I approximate the error that I will find for double precision (as an example)?

oldselflearner1959
    (a) There is no universal answer. sqrt(epsilon) suffices in some circumstances because it is small but leaves some room for precision, but even it needs to be scaled according to x. (b) Numerical differentiation is notoriously unstable; you must analyze the f and x you use. (c) You cannot expect rounding and truncation to cancel. – Eric Postpischil May 10 '18 at 12:14

1 Answer


You should use something like h = sqrt(epsilon)*(1 + abs(x)), since for large x a fixed step h = sqrt(epsilon) becomes tiny relative to x and the computed x + h loses precision.
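A minimal sketch of this scaled step size (the helper name `step` and the sample points are my own, not from the answer):

```python
import math

eps = 2.0 ** -52  # machine epsilon for IEEE double precision

def step(x):
    """Forward-difference step scaled to the magnitude of x."""
    return math.sqrt(eps) * (1.0 + abs(x))

print(step(0.0))  # about 1.49e-8, i.e. sqrt(eps)
print(step(1e6))  # the absolute step grows with |x|
```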

In general, the difference formula for the k-th derivative has an evaluation (rounding) error proportional to epsilon/h^k, which adds to the truncation error of order h^p of the approximation formula. The total error is minimized when both contributions are about equal, that is, for h = epsilon^(1/(k+p)), which then needs to be scaled for the size of x.
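You can see this trade-off numerically by scanning h over several decades for the one-sided formula (k = p = 1). A sketch, assuming f = exp at x = 1 so that all scales are about 1:

```python
import math

# Forward-difference error for f = exp at x = 1 (exact derivative is e),
# as a function of the step h = 10^-k.
f, dfx, x = math.exp, math.e, 1.0

errors = {}
for k in range(1, 16):
    h = 10.0 ** -k
    errors[k] = abs((f(x + h) - f(x)) / h - dfx)

for k, e in errors.items():
    print(f"h=1e-{k:02d}  error={e:.3e}")
# The error first shrinks (truncation ~ h) and then grows again
# (rounding ~ eps/h); the minimum sits near h ~ sqrt(eps) ~ 1e-8.
```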

Let x be of scale 1 in the following examples, so the scaling factor can be ignored. In your one-sided first-derivative formula k = p = 1, so h = epsilon^(1/2). If you take the symmetric difference formula, then k = 1, p = 2, and the optimal h is about epsilon^(1/3). If you approximate the second derivative using the symmetric second-order difference formula, you get k = p = 2, so h = epsilon^(1/4) is optimal, etc.
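The three cases can be checked side by side. A sketch, assuming f = sin at x = 1 (chosen here only because the exact derivatives are known; not part of the original answer):

```python
import math

eps = 2.0 ** -52     # double-precision machine epsilon
f, x = math.sin, 1.0
df, d2f = math.cos(x), -math.sin(x)  # exact f'(1) and f''(1)

# One-sided first derivative: k = p = 1, optimal h ~ eps^(1/2)
h = eps ** (1 / 2)
fwd = (f(x + h) - f(x)) / h

# Symmetric first derivative: k = 1, p = 2, optimal h ~ eps^(1/3)
h = eps ** (1 / 3)
ctr = (f(x + h) - f(x - h)) / (2 * h)

# Symmetric second derivative: k = p = 2, optimal h ~ eps^(1/4)
h = eps ** (1 / 4)
sec = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

print(abs(fwd - df))   # roughly 1e-8  ~ eps^(1/2)
print(abs(ctr - df))   # roughly 1e-11 ~ eps^(2/3)
print(abs(sec - d2f))  # roughly 1e-8  ~ eps^(1/2)
```

Note that the achievable accuracy also follows the same exponents: about eps^(p/(k+p)) in each case, which is why the symmetric first-derivative formula is so much more accurate than the one-sided one.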

Lutz Lehmann
  • Note that for x = 0 there is no natural way to choose h, because rescaling the argument (replacing f(x) by f(cx)) gives essentially the same problem at x = 0, so no scale is distinguished. –  May 10 '18 at 18:02