First, that is the closest value to 0.0009 that can be represented as a double-precision IEEE floating-point number.
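You can see the exact value the double actually stores with the `decimal` module; a quick sketch (using 0.0009 as the example value):

```python
from decimal import Decimal

# Decimal(float) reveals the exact binary value the double stores
stored = Decimal(0.0009)
print(stored)  # a long decimal near, but not equal to, 0.0009

# the stored value is not exactly 9/10000
print(stored == Decimal("0.0009"))  # False
```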
Every floating-point operation can introduce a rounding error. Epsilon-based strategies are therefore not reliable for determining whether two floating-point calculations represent the "same" real number.
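A quick sketch of why a machine-epsilon tolerance breaks down once errors accumulate (the loop count here is arbitrary):

```python
import sys

# 0.1 is not exactly representable, so each addition can round
total = sum(0.1 for _ in range(1000))
print(total)  # slightly off from 100.0

diff = abs(total - 100.0)
eps = sys.float_info.epsilon  # ~2.22e-16

# the accumulated error dwarfs machine epsilon, so an
# epsilon-sized tolerance declares these values "different"
print(diff > eps)  # True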
Determining whether two calculations do or do not produce the same real number is, in the general case, undecidable; you can reduce the halting problem to it.
The "best" (and most expensive) general-purpose way to determine whether they could be equal is interval arithmetic. Replace each double with a pair of doubles representing an interval. In every calculation, round the upper bound up and the lower bound down. When subtracting or dividing, take care not to flip the interval: both operations are monotonically decreasing in their second operand, so the new lower bound comes from the old upper bound (multiplying by a negative has the same effect). If you divide an interval containing 0 by an interval containing 0, both bounds become NaN; dividing an interval not containing 0 by one that does yields the whole real line, -inf to +inf.
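A minimal sketch of the idea in Python, using `math.nextafter` to widen each result outward by one ulp rather than setting the hardware rounding mode (the class and method names here are my own, not from any particular library):

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    @staticmethod
    def _widen(lo: float, hi: float) -> "Interval":
        # round outward: one ulp down on the low end, one ulp up on the high end
        return Interval(math.nextafter(lo, -math.inf),
                        math.nextafter(hi, math.inf))

    def __add__(self, o: "Interval") -> "Interval":
        return Interval._widen(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o: "Interval") -> "Interval":
        # subtraction is decreasing in its second operand: swap o's bounds
        return Interval._widen(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o: "Interval") -> "Interval":
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval._widen(min(p), max(p))

    def __truediv__(self, o: "Interval") -> "Interval":
        if o.lo <= 0.0 <= o.hi:
            if self.lo <= 0.0 <= self.hi:
                return Interval(math.nan, math.nan)  # 0/0: no information
            return Interval(-math.inf, math.inf)     # x/0: whole real line
        q = (self.lo / o.lo, self.lo / o.hi, self.hi / o.lo, self.hi / o.hi)
        return Interval._widen(min(q), max(q))
```

For example, `Interval(1.0, 1.0) / Interval(3.0, 3.0)` yields an interval guaranteed to contain the real number 1/3, since the one-ulp widening covers any rounding the division itself performed.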
There are a few problems with interval arithmetic: it is expensive (significantly more than 2x), it requires replacing every primitive operation with variants that control the rounding direction, and after a non-trivial amount of math you will find the interval has grown shockingly large.
Traditional IEEE round-to-nearest usually produces values closer to the true result, because each rounding error behaves in effect like a random step. If you flip a coin 10,000 times, the number of heads has mean 5000, variance 2500, and standard deviation 50; so roughly 95% of the time you'll land in 4900 to 5100, and about 99.7% of the time in 4850 to 5150. Each rounding operation in effect adds a value between -epsilon/2 and +epsilon/2 (actually better than a coin: it picks the nearest representable value, so the error is often much smaller than epsilon/2). After 10,000 operations the accumulated error behaves like a random walk with standard deviation on the order of 50 epsilon, so the likely error is within roughly +/-100 epsilon. Interval arithmetic, meanwhile, must report the worst case, in which every error hits its maximum of epsilon/2 in the same direction: a guaranteed bound of +/-5000 epsilon.
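A sketch of that contrast: sum 10,000 random doubles naively and compare against `math.fsum`, which returns the correctly rounded sum of the same values. The observed error sits far below the worst-case bound that interval arithmetic would have to report (the bound formula here is my own crude overestimate, not a standard one):

```python
import math
import random
import sys

random.seed(0)
n = 10_000
xs = [random.uniform(0.0, 1.0) for _ in range(n)]

naive = 0.0
for x in xs:
    naive += x  # each += may round once

exact = math.fsum(xs)  # correctly rounded sum of the same values

eps = sys.float_info.epsilon  # 2**-52
actual_err = abs(naive - exact)
# worst case: every one of the n additions rounds by a half ulp of the
# final magnitude, all in the same direction
worst_case = n * (eps / 2) * abs(exact)

print(f"actual error:     {actual_err:.3e}")
print(f"worst-case bound: {worst_case:.3e}")
```

The random-walk cancellation means the actual error is typically orders of magnitude below the guaranteed bound.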
TL;DR
You need a bigger epsilon.