I need to do integer division in JavaScript, which only gives me double-precision floating-point numbers to work with. Normally I would just do Math.floor(a / b) (or a / b | 0) and be done with it, but in this case I'm running a simulation executed in lockstep, so I need to ensure consistency across machines and runtimes regardless of whether they use 64-bit or 80-bit internal precision.
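For concreteness, the two idioms I'm comparing look like this (a minimal sketch, assuming a >= 0 and b > 0 so that flooring and truncation agree; the helper names are just for illustration):

```js
// Both helpers assume a and b are integers with a in 0..2^31-1 and b in 1..2^31-1.
function divFloor(a, b) {
  return Math.floor(a / b); // IEEE 754 double division, then floor
}

function divTrunc(a, b) {
  return (a / b) | 0;       // IEEE 754 double division, then ToInt32 truncation
}

console.log(divFloor(7, 2), divTrunc(7, 2)); // 3 3
```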
I haven't noticed any inconsistencies so far, but I haven't been able to conclusively convince myself that they can't happen. So I'm left wondering:
Assuming a and b are integers in the ranges 0..2^31-1 and 1..2^31-1 respectively, are the results of JavaScript's Math.floor(a / b) (and a / b | 0) guaranteed to be consistent across machines and runtimes? Why or why not?