The following is Python code that aims to calculate the derivative of a given function f.
Version one (Solution)
```python
x[ix] += h                      # increment by h
fxh = f(x)                      # evaluate f(x + h)
x[ix] -= 2 * h                  # decrement by 2h to reach x - h
fxnh = f(x)                     # evaluate f(x - h)
x[ix] += h                      # restore the original value
numgrad = (fxh - fxnh) / 2 / h
```
Version two (my version)
```python
fx = f(x)                       # evaluate f(x)
x[ix] += h                      # increment by h
fxh = f(x)                      # evaluate f(x + h)
x[ix] -= h                      # restore the original value
numgrad = (fxh - fx) / h
```
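For context, here is a minimal, self-contained sketch that runs both snippets side by side. The test function `f`, the sample point `x`, the step `h`, and the index `ix` are my own illustrative choices, not part of the original code.

```python
import numpy as np

def f(x):
    # Example function: sum of sin(x), so the true gradient is cos(x).
    return np.sin(x).sum()

x = np.array([0.7, -1.3, 2.1])
true_grad = np.cos(x)
h = 1e-5
ix = 1                              # check a single coordinate

# Version one: central difference
x[ix] += h
fxh = f(x)
x[ix] -= 2 * h
fxnh = f(x)
x[ix] += h
central = (fxh - fxnh) / 2 / h

# Version two: forward difference
fx = f(x)
x[ix] += h
fxh = f(x)
x[ix] -= h
forward = (fxh - fx) / h

print("central difference error:", abs(central - true_grad[ix]))
print("forward difference error:", abs(forward - true_grad[ix]))
```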
It has been shown that version one gives better accuracy. Could anyone explain why that is the case, and what the difference between the two calculations is?
UPDATE: I didn't realize at first that this is a mathematical problem; I thought it was a problem related to the effects of floating-point accuracy. As suggested by MSeifert, I agree that floating-point noise matters: a result of small magnitude is more susceptible to that noise.
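As a rough illustration of that trade-off, the sketch below sweeps the step size h for a scalar test function (again my own choice, np.sin, whose true derivative is np.cos). For larger h the mathematical approximation error dominates, while for very small h floating-point noise takes over in both formulas.

```python
import numpy as np

def f(x):
    # Scalar test function; the true derivative at x is cos(x).
    return np.sin(x)

x0 = 0.7
for h in [1e-2, 1e-4, 1e-6, 1e-8, 1e-10]:
    central = (f(x0 + h) - f(x0 - h)) / (2 * h)   # version one
    forward = (f(x0 + h) - f(x0)) / h             # version two
    print(f"h={h:.0e}  central err={abs(central - np.cos(x0)):.2e}"
          f"  forward err={abs(forward - np.cos(x0)):.2e}")
```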