If I understood your question correctly, you have a point (x, y, z) in space, and you would like to estimate the error there by interpolating from the known stations. You also suggest that the validity of an error reading decreases with distance from the point where the error is known.
So, for a point (x, y, z) you can calculate its distance to each of the known receiving stations. Then you apply some weight function to each of these distances. Finally, you take an average weighted by those weights (or possibly do some other trickery to eliminate outliers).
How about this:
import numpy

# known station coordinates (number of rows matching number of stations)
coords = numpy.array([
    (x1, y1, z1),
    (x2, y2, z2),
    ... ])

# respective error values (number of items matching number of stations)
err_values = numpy.array([
    e1,
    e2,
    ... ])

def interpolated_error(p):
    # p is a three-element array representing our coordinates
    # distances will contain the spatial Euclidean distances to the stations
    distances = numpy.linalg.norm(coords - p[None, :], axis=1)
    # get the weights somehow
    weights = my_weight_function(distances)
    # return the weighted average
    return numpy.average(err_values, weights=weights)
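The choice of my_weight_function is up to you. A simple and common starting point is inverse-distance weighting; here is a minimal sketch (the exponent and the small softening constant are assumptions you would tune for your data):

def my_weight_function(distances, power=2.0, softening=1e-9):
    # inverse-distance weighting: nearby stations dominate the average,
    # far-away stations contribute almost nothing;
    # softening avoids division by zero exactly at a station
    return 1.0 / (distances ** power + softening)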
There is one more trick which may be especially useful in this case. The return statement above could be replaced by:

eps = 1e-12   # a tiny constant; the exact value is not critical
return numpy.sum(err_values * weights) / (eps + numpy.sum(weights))
This is basically the same weighted average, but with a small number eps added to the denominator. The point here is that since we are talking about an error, it should go to zero very far away from the known points. Otherwise we would report the average of the known errors as the error on the other side of the globe, which is not reasonable. The only reasonable assumption is that the error is zero far away. It probably is not, but we do not know any better, and thus zero is the best guess.
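As a quick sanity check, here is a self-contained sketch of this behaviour (station positions and error values are made up for illustration):

import numpy

coords = numpy.array([(0.0, 0.0, 0.0),
                      (100.0, 0.0, 0.0),
                      (0.0, 100.0, 0.0)])
err_values = numpy.array([1.5, -0.5, 2.0])
eps = 1e-12

def my_weight_function(distances, power=2.0, softening=1e-9):
    return 1.0 / (distances ** power + softening)

def interpolated_error(p):
    distances = numpy.linalg.norm(coords - p[None, :], axis=1)
    weights = my_weight_function(distances)
    return numpy.sum(err_values * weights) / (eps + numpy.sum(weights))

# near the first station the estimate is dominated by its error value
print(interpolated_error(numpy.array([1.0, 1.0, 0.0])))
# far away from all stations the estimate decays towards zero
print(interpolated_error(numpy.array([1e9, 1e9, 1e9])))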
If I understood your problem incorrectly, let me know. (If you are thinking of the interpolation problem as a way to increase accuracy on the surface of the earth, you actually have a 2D problem on the surface of the globe, not a true 3D problem.)
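In that 2D case the only change to the scheme above would be the distance metric: instead of 3D Euclidean distances, you would compute great-circle distances between (latitude, longitude) pairs. A minimal sketch using the haversine formula (angles in radians; the mean earth radius is an approximation):

import numpy

earth_radius_km = 6371.0   # mean radius, an approximation

def great_circle_distance(lat1, lon1, lat2, lon2):
    # haversine formula; all angles in radians, result in kilometres
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = (numpy.sin(dlat / 2.0) ** 2
         + numpy.cos(lat1) * numpy.cos(lat2) * numpy.sin(dlon / 2.0) ** 2)
    return 2.0 * earth_radius_km * numpy.arcsin(numpy.sqrt(a))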