It is not a problem.
First, note that 0 ≤ a < 1, so errors in the average tend to diminish, not accumulate. Incoming new data displaces old errors.
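For concreteness, here is a minimal sketch of the two operation sequences as I understand them from the question (the function names are mine):

```c
/* Former sequence: two multiplications and one addition. */
double update_former(double avg, double x, double a)
{
    return (1 - a) * avg + a * x;
}

/* Latter sequence: a subtraction, then a multiplication and an addition. */
double update_latter(double avg, double x, double a)
{
    return avg + (x - avg) * a;
}
```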
Subtracting floating-point numbers of similar magnitude (and same sign) does not lose absolute accuracy. (You wrote “precision”, but precision is the fineness with which values are represented, e.g., the width of the double type, and that does not change with subtraction.) Subtracting numbers of similar magnitude may increase the relative error: since the result is smaller, the error is larger relative to it. However, the relative error of an intermediate value is of no concern.
In fact, subtracting two numbers, each of which equals or exceeds half the other, has no error: The correct mathematical result is exactly representable (Sterbenz’ Lemma).
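If you want to see this, here is a small demonstration (the values are arbitrary, chosen only to satisfy the lemma’s condition; it assumes default IEEE-754 round-to-nearest arithmetic and no aggressive compiler reassociation). It recovers the exact rounding error of the subtraction with the Fast2Sum algorithm:

```c
#include <stdio.h>

/* Exact rounding error of x + b, per the Fast2Sum algorithm;
   valid when |x| >= |b|. */
static double add_error(double x, double b)
{
    double s = x + b;
    return b - (s - x);
}

int main(void)
{
    double x = 0.8, y = 0.6;  /* each equals or exceeds half the other */
    printf("%g\n", add_error(x, -y));        /* prints 0: x - y is exact */
    printf("%g\n", add_error(1.0, -1e-20));  /* prints -1e-20: not exact */
    return 0;
}
```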
So the subtraction in the latter operation sequence is likely to be exact or low-error, depending on how much the values fluctuate. Then the multiplication and the addition have the usual rounding errors, and they are not particularly worrisome unless there are both positive and negative values, which can lead to large relative errors when the average is near zero. If a fused multiply-add operation is available (see fma in <tgmath.h>), then you can eliminate the error from the multiplication.
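A sketch of the latter sequence with the multiplication fused away (again, the function name is mine):

```c
#include <tgmath.h>

/* fma computes a*(x - avg) + avg with a single rounding, so the
   multiplication contributes no error of its own. */
double update_latter_fma(double avg, double x, double a)
{
    return fma(a, x - avg, avg);
}
```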
In the former operation sequence, the evaluation of 1-a will be exact if a is at least ½ (Sterbenz’ Lemma again). That leaves two multiplications and one addition. This will tend to have very slightly greater error than the latter sequence, but likely not enough to notice. As before, old errors will tend to diminish.
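If you want to compare the two sequences empirically, a small driver along these lines will do (the samples are arbitrary; a = 0.625 is at least ½, so 1-a is exact here):

```c
#include <stdio.h>

int main(void)
{
    double data[] = { 3.0, 1.5, 2.25, 4.0, 3.5 };  /* arbitrary samples */
    double a = 0.625;      /* a >= 1/2, so 1 - a is computed exactly */
    double former = data[0], latter = data[0];

    for (size_t i = 1; i < sizeof data / sizeof data[0]; ++i)
    {
        former = (1 - a) * former + a * data[i];
        latter = latter + (data[i] - latter) * a;
        printf("%.17g  %.17g\n", former, latter);
    }
    return 0;
}
```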