
Suppose x is a bitmask value, and b is one flag, e.g.

x = 0b10101101
b = 0b00000100

There seem to be two ways to check whether the bit indicated by b is set in x:

if ((x & b) != 0)    // (1)
if ((x & b) == b)    // (2)

In most circumstances these two checks seem to always yield the same result, given that b always has exactly one bit set.

However, I wonder whether there is any exception that makes one method better than the other.
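For reference, here is a minimal sketch (my own check, not part of the original question) showing that with the single-bit b above the two conditions agree. Hex literals are used with the binary values in comments, since 0b literals are not standard in older C:

#include <stdio.h>

int main(void) {
    unsigned x = 0xAD; /* 0b10101101 */
    unsigned b = 0x04; /* 0b00000100: a single-bit flag */

    /* Both checks print 1 here, since x & b == 0b00000100 == b. */
    printf("(x & b) != 0  ->  %d\n", (x & b) != 0);
    printf("(x & b) == b  ->  %d\n", (x & b) == b);

    return 0;
}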

franklsf95

1 Answer


In general, if we interpret both values as bit sets, the first condition checks if the intersection of x and b is not empty (or, to put it differently: if b and x have elements in common), while the second one checks if b is a subset of x.

Clearly, if b is a singleton, b is a subset of x if and only if the intersection is not empty.

So, whenever you cannot guarantee 100% that b is a singleton, choose your condition wisely. Ask yourself whether you want to express that all elements of b must also be elements of x, or that some elements of b are also elements of x. Except in the single-bit case, that is a big difference.
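As a concrete illustration (the values here are chosen for this sketch, not taken from the question), a two-bit mask where only one of the two bits is present in x makes the two checks disagree:

#include <stdio.h>

int main(void) {
    unsigned x = 0xAD; /* 0b10101101 from the question             */
    unsigned b = 0x06; /* 0b00000110: two bits, only bit 2 is in x */

    /* (1) intersection non-empty: prints 1, because bit 2 is shared */
    printf("(x & b) != 0  ->  %d\n", (x & b) != 0);

    /* (2) subset check: prints 0, because bit 1 of b is not set in x */
    printf("(x & b) == b  ->  %d\n", (x & b) == b);

    return 0;
}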

Ingo