
I am trying to show that the truncation error of second-order numerical differentiation is indeed of double the order of that of first-order numerical differentiation, taking the machine error/round-off error (eps()) into account.

Here is my code in Julia:

function first_order_numerical_D(f)
  # Forward difference with step h = sqrt(eps(x)), which balances the
  # O(h) truncation error against the O(eps/h) round-off error.
  function df(x)
    h = sqrt(eps(x))
    (f(x+h) - f(x))/h
  end
  df
end

function second_order_numerical_D(f)
  # Central difference with the same step; the truncation error is O(h^2).
  function df(x)
    h = sqrt(eps(x))
    (f(x+h) - f(x-h))/(2.0*h)
  end
  df
end
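
As a quick sanity check of the two closures (this snippet is just illustrative, not part of the error analysis):

# Illustrative usage of the differentiators defined above:
df1 = first_order_numerical_D(exp)
df2 = second_order_numerical_D(exp)
df1(0.5)   # should be close to exp(0.5) ≈ 1.6487
df2(0.5)   # closer still, while truncation error dominates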

function analytical_diff_exp(x)
    return exp(x)
end
function analytical_diff_sin(x)
    return cos(x)
end
function analytical_diff_cos(x)
    return -sin(x)
end
function analytical_diff_sqrt(x)
    return 1/(2.0*sqrt(x))
end

function first_order_error_exp(x)
    return first_order_numerical_D(exp)(x) - analytical_diff_exp(x) 
end
function first_order_error_sin(x)
    return first_order_numerical_D(sin)(x) - analytical_diff_sin(x) 
end
function first_order_error_cos(x)
    return first_order_numerical_D(cos)(x) - analytical_diff_cos(x) 
end
function first_order_error_sqrt(x)
    return first_order_numerical_D(sqrt)(x) - analytical_diff_sqrt(x) 
end

function second_order_error_exp(x)
    return second_order_numerical_D(exp)(x) - analytical_diff_exp(x)
end
function second_order_error_sin(x)
    return second_order_numerical_D(sin)(x) - analytical_diff_sin(x)
end
function second_order_error_cos(x)
    return second_order_numerical_D(cos)(x) - analytical_diff_cos(x) 
end
function second_order_error_sqrt(x)
    return second_order_numerical_D(sqrt)(x) - analytical_diff_sqrt(x) 
end

function round_off_err_exp(x)
    # Round-off bound 2*eps(x)*|f(x)|/h, which equals 2*sqrt(eps(x))*|f(x)| for h = sqrt(eps(x)).
    return 2.0*sqrt(eps(x))*exp(x)
end
function round_off_err_sin(x)
    return 2.0*sqrt(eps(x))*sin(x)
end
function round_off_err_cos(x)
    return 2.0*sqrt(eps(x))*cos(x)
end
function round_off_err_sqrt(x)
    return 2.0*sqrt(eps(x))*sqrt(x)
end

function first_order_truncation_err_exp(x)
    # Adding round_off_err cancels the (negative) round-off term, leaving |truncation error|.
    return abs(first_order_error_exp(x)+round_off_err_exp(x))
end
function first_order_truncation_err_sin(x)
    return abs(first_order_error_sin(x)+round_off_err_sin(x))
end
function first_order_truncation_err_cos(x)
    return abs(first_order_error_cos(x)+round_off_err_cos(x))
end
function first_order_truncation_err_sqrt(x)
    return abs(first_order_error_sqrt(x)+round_off_err_sqrt(x))
end

function second_order_truncation_err_exp(x)
    # The central difference divides by 2h, so only half the round-off bound applies.
    return abs(second_order_error_exp(x)+0.5*round_off_err_exp(x))
end
function second_order_truncation_err_sin(x)
    return abs(second_order_error_sin(x)+0.5*round_off_err_sin(x))
end
function second_order_truncation_err_cos(x)
    return abs(second_order_error_cos(x)+0.5*round_off_err_cos(x))
end
function second_order_truncation_err_sqrt(x)
    return abs(second_order_error_sqrt(x)+0.5*round_off_err_sqrt(x))
end

This should give me the right truncation error once I subtract the round_off_err_f term (here I add it, because the actual Taylor expansion shows that both the round-off error and the truncation error carry a negative sign).
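
For concreteness, here is a sketch of the decomposition I am relying on (my own summary of the linked notes, written out for the forward difference, with machine epsilon \varepsilon):

\[
\frac{f(x+h)(1+\delta_1) - f(x)(1+\delta_2)}{h}
  = f'(x) + \frac{h}{2}\,f''(\xi)
  + \frac{\delta_1 f(x+h) - \delta_2 f(x)}{h},
\qquad |\delta_i| \le \varepsilon .
\]

The round-off term is bounded by roughly 2\varepsilon|f(x)|/h, which for h = \sqrt{\varepsilon} becomes 2\sqrt{\varepsilon}|f(x)|: exactly what round_off_err_f(x) computes. The central difference divides by 2h instead of h, which halves this bound (hence the factor 0.5 above).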

For the analytical derivation/proof, see https://www.uio.no/studier/emner/matnat/math/MAT-INF1100/h10/kompendiet/kap11.pdf and http://www2.math.umd.edu/~dlevy/classes/amsc466/lecture-notes/differentiation-chap.pdf

But the results show that:

first_order_truncation_err_exp(0.5), first_order_truncation_err_sin(0.5), first_order_truncation_err_cos(0.5), first_order_truncation_err_sqrt(0.5)

(4.6783240139052204e-8, 1.2990419187857229e-8, 2.8342226290287478e-9, 4.364449135429996e-9)

second_order_truncation_err_exp(0.5), second_order_truncation_err_sin(0.5), second_order_truncation_err_cos(0.5), second_order_truncation_err_sqrt(0.5)

(1.8874426561390482e-8, 7.938850300905947e-9, 4.1240999200086055e-9, 7.45058059692383e-9)

Where:

eps(0.5)=1.1102230246251565e-16

The second_order_truncation_err_f() values should be on the order of 1e-16 rather than 1e-8, and I don't know why this doesn't work.

Tack_Tau

1 Answer


That is because what you compute is dominated by the round-off error, e.g.:

julia> round_off_err_sqrt(0.5)
1.490116119384766e-8

And in order to see the benefit of your "second order" derivative (the term I usually see for that is central difference) you need to choose a larger step size; note that sqrt(eps(0.5)) ≈ 1.05e-8, which is why all of your reported values sit near 1e-8. In the literature h = cbrt(eps()) is usually seen.
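
A sketch of the usual balancing argument behind that choice (my own addition, assuming |f'''| = O(1)):

\[
E(h) \approx \underbrace{\frac{h^2}{6}\,|f'''(x)|}_{\text{truncation}}
           + \underbrace{\frac{\varepsilon\,|f(x)|}{h}}_{\text{round-off}},
\qquad
E'(h^*) = 0 \;\Rightarrow\;
h^* = \left(\frac{3\varepsilon\,|f(x)|}{|f'''(x)|}\right)^{1/3} \sim \varepsilon^{1/3}.
\]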

function second_order_numerical_D(f)
    function df(x)
        # Larger step: cbrt(eps(x)) instead of sqrt(eps(x)); the half-steps
        # keep the effective stencil width equal to h.
        h = cbrt(eps(x))
        (f(x+h/2) - f(x-h/2))/h
    end
    df
end
julia> second_order_error_exp(0.5)
1.308131380994837e-11
Lufu
  • Note that it still does not reach `1e-16`. That is because of the subtractive cancellation error. You can avoid that with the complex-step method or automatic differentiation – Lufu Oct 28 '19 at 14:42
  • I actually cancel out the round-off error by using ```first_order_truncation_err_f = abs(first_order_error_f(x)+round_off_err_f(x))```. This treats the round-off error approximately as ```rand()*2.0*eps(x)/sqrt(eps(x))```, which is derived in the notes I posted in my question. And the aim of this question is not to use automatic differentiation. – Tack_Tau Oct 28 '19 at 14:42
  • `first_order_error_f(x)+round_off_err_f(x)`: if you check the signs of these functions separately, you will find that they often have the same sign – Lufu Oct 28 '19 at 14:54
  • Because the actual round-off error here is `-round_off_err_f(x)`; to be more specific, `analytical_diff_f(x)-numerical_diff_f(x) = -truncation_err_f(x)-round_off_err_f(x)`, so `abs(truncation_err_f(x)) = abs(analytical_diff_f(x)-numerical_diff_f(x)+round_off_err_f(x))` – Tack_Tau Oct 28 '19 at 15:20
  • If you want to show the scaling, I would suggest you plot your errors for decreasing step sizes; that will show the scaling nicely (a sketch of such an experiment follows below). I doubt that you will reach `1e-16` with finite-difference calculations. – Lufu Oct 28 '19 at 15:37
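
Following up on the last comment, here is a minimal sketch of such a scaling experiment, plus the complex-step method mentioned above (the names difference_errors and complex_step_D are my own, not from the post):

# Print the forward- and central-difference errors for decreasing step
# sizes; the O(h) vs O(h^2) regimes and the round-off floor become visible.
function difference_errors(f, fprime, x)
    for k in 1:15
        h = 10.0^(-k)
        fwd = (f(x + h) - f(x)) / h          # first order:  error O(h)
        ctr = (f(x + h) - f(x - h)) / (2h)   # second order: error O(h^2)
        println(h, "  ", abs(fwd - fprime(x)), "  ", abs(ctr - fprime(x)))
    end
end

difference_errors(exp, exp, 0.5)

# The complex-step derivative involves no subtraction of nearby numbers,
# so it reaches machine precision even with a tiny step:
complex_step_D(f, x; h = 1e-20) = imag(f(x + im*h)) / h
abs(complex_step_D(exp, 0.5) - exp(0.5))   # ~ eps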