Typically, the & operator is implemented in hardware circuitry. The circuitry runs over all the bits of the number in parallel, in a single step; it does not look at the bits one by one.
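Just to illustrate what that means at the bit level (the values below are arbitrary), & combines every bit position of its two operands at once:

```java
// Illustrative only: print the bit patterns to show that & combines
// all bit positions of both operands; the specific values don't matter.
public class AndDemo {
    public static void main(String[] args) {
        int a = 0b1100_1010;
        int b = 0b1010_0110;
        System.out.println(Integer.toBinaryString(a));     // 11001010
        System.out.println(Integer.toBinaryString(b));     // 10100110
        System.out.println(Integer.toBinaryString(a & b)); // 10000010
    }
}
```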
Integer division and remainder, on the other hand, are a lot more complex. Popular CPUs don't have dedicated circuitry for them; instead, they are implemented in microcode. It's hard to find a primary source, but various sources on the internet put integer division at roughly 20-40 times as expensive as bitwise operators like AND.
Compiler writers are well aware of how expensive the division and remainder instructions are, and do everything possible to avoid generating code that uses them; see, for example, Why does GCC use multiplication by a strange number in implementing integer division? It is very likely, but not guaranteed, that N % 2 == 0 is compiled to the equivalent of (N & 1) == 0. To be sure, you would have to examine the machine code generated by the compiler. In Java, the compiler you should care about most is the JIT compiler, which generates native machine code on the fly. Examining its output is possible, but not easy, and the optimizations it performs may change from one version to the next, or vary between CPU models.
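The reason the compiler is free to make that substitution is that the two expressions agree for every int, including negative ones. A small throwaway check (not from the original answer, just a sanity test) confirms it:

```java
// Verify that n % 2 == 0 and (n & 1) == 0 give the same answer for a
// range of values, including negative n in two's complement.
public class EvenEquivalence {
    public static void main(String[] args) {
        for (int n = -1000; n <= 1000; n++) {
            if ((n % 2 == 0) != ((n & 1) == 0)) {
                throw new AssertionError("Mismatch at " + n);
            }
        }
        System.out.println("n % 2 == 0 and (n & 1) == 0 agree on the whole range");
    }
}
```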
If you really want to know which is faster in your application with your hardware, you should create a benchmark and test it yourself.
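As a rough sketch of what such a benchmark could look like, here is a minimal JMH example. It assumes the JMH dependencies (org.openjdk.jmh:jmh-core plus the annotation processor) are on the classpath; the class, method, and parameter names are just illustrative:

```java
import org.openjdk.jmh.annotations.*;

// Minimal JMH sketch comparing the two evenness checks.
// Run with the standard JMH runner or the jmh Maven/Gradle plugin.
@State(Scope.Thread)
public class EvenCheckBenchmark {

    // Arbitrary inputs; @Param makes JMH run the benchmarks for each value.
    @Param({"12345", "-9876"})
    int n;

    @Benchmark
    public boolean remainder() {
        return n % 2 == 0;
    }

    @Benchmark
    public boolean bitwiseAnd() {
        return (n & 1) == 0;
    }
}
```

Returning the boolean from each method keeps the JIT from eliminating the computation as dead code, which is one of the pitfalls a hand-rolled timing loop would run into.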