
In the book "The C Programming Language" by K&R, there is a bit-count function:

int bitsCount(unsigned x)
{
    int b;
    for (b = 0; x != 0; x >>= 1)
        if (x & 01)
            b++;
    return b;
}

My question is: why do they use x & 01 rather than x & 1 or x & 00000001? Doesn't 01 mean octal 1?

wuyefeibao (asked) · ecjb (edited)

1 Answer


Semantically, you're correct: it doesn't matter. x & 01, x & 1, x & 0x1, etc. all do exactly the same thing (and in every sane compiler generate exactly the same code). What you're seeing here is an author's convention, once pretty standard (but never universal), now much less so.

The use of octal in this case is to make it clear that bitwise operations are taking place; I'd wager that the author defines flag constants (intended to be bitwise-OR'd together) in octal as well. That's because it's much easier to reason about, say, 010 & 017 than about 8 & 15: you can think about it one digit at a time.

Today I find it much more common to use hex, for exactly the same reason (bitwise operations apply one digit at a time). The advantage of hex over octal is that hex digits align nicely to bytes, and I'd expect to see most bitwise operations written with hex constants in modern code (although trivial constants below 10 I tend to write as a single decimal digit; so I'd personally use x & 1 rather than x & 0x1 in this context).

addaon
    Correct. A coding standard we used in the mid 80s specified that we had to use octal or hex constants to make bit operations more obvious and 01 is easier to type than 0x1. We tended to use hex for values greater than 7. – Dipstick Jan 10 '14 at 18:59
  • 01 doesn't mean octal number. You can use any of (x & 1), (x & 01), (x & 0x1); it will not make any difference. Whenever you use a literal, you should use the proper suffix, e.g. u (for unsigned int), ul (for unsigned long), f (for float), and d (double). This is good coding practice; it will help you avoid issues caused by automatic type conversion during arithmetic operations. For example, in this case you should use (x & 1u) or (x & 0x1u) – Cool Goose Jan 11 '14 at 07:29
  • 1
    @iGRJ: your opening comment is odd because `01` _does_ mean octal. The rest is mostly accurate enough, though the suffix rule is more optional than you imply (in practice especially). Note that there is no suffix for `double`, especially not `d`. – Jonathan Leffler Nov 19 '17 at 18:04