For some function f(x), I want to approximate f'(x) and f''(x) using the forward-difference formula:

f'(x) ≈ [f(x+h) - f(x)] / h

(and the same formula applied twice for f''(x)).
Which value of h should I choose in each case? I know that for f'(x) it should be h ≈ sqrt(epsilon), where epsilon is the machine epsilon; my guess is that this is the point where the truncation error and the rounding error roughly balance each other, which is what produces that value. But do I have to choose h differently for f''(x)?
Also, how should I estimate the error I can expect, taking double precision as an example?
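To make the question concrete, here is a small Python sketch I've been using to probe the error empirically. It assumes the second derivative is formed by applying the forward difference twice, i.e. f''(x) ≈ [f(x+2h) - 2f(x+h) + f(x)] / h², and uses f = exp at x = 1 as a test function (so the exact derivatives are known); the particular candidate step sizes are just my guesses:

```python
import math

def fwd1(f, x, h):
    # Forward difference for f'(x): [f(x+h) - f(x)] / h
    return (f(x + h) - f(x)) / h

def fwd2(f, x, h):
    # Forward difference applied twice for f''(x):
    # [f(x+2h) - 2 f(x+h) + f(x)] / h^2
    return (f(x + 2*h) - 2*f(x + h) + f(x)) / h**2

f = math.exp        # test function: f' = f'' = exp
x = 1.0
exact = math.exp(x)
eps = 2.0**-52      # double-precision machine epsilon

# Candidate step sizes, including sqrt(eps) and a few other powers of eps
for h in [eps**0.5, eps**(1/3), eps**0.25, 1e-4, 1e-6, 1e-8]:
    err1 = abs(fwd1(f, x, h) - exact)
    err2 = abs(fwd2(f, x, h) - exact)
    print(f"h = {h:.3e}   err f' = {err1:.3e}   err f'' = {err2:.3e}")
```

Running this, the error for f' does seem smallest near h = sqrt(eps), while the error for f'' behaves quite differently as h shrinks, which is what prompted the question.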