`0x7FFFFFFF` does require 32 bits. It could be expressed as an unsigned integer in only 31 bits:

111 1111 1111 1111 1111 1111 1111 1111

but if we interpret that as a signed integer using two's complement, then the leading 1 would indicate that it's negative. So we have to prepend a leading 0:

0 111 1111 1111 1111 1111 1111 1111 1111

which then makes it 32 bits.
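To see the same point concretely at 32 bits, here's a small demo of my own (not taken from your code): the 0-plus-31-ones pattern is a large positive number, while the same 31 ones with the leading bit set to 1 comes out negative when those bits are read as a signed `int`.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Leading 0 followed by 31 one-bits: positive as a signed value. */
    int max_pos = 0x7FFFFFFF;
    printf("%d\n", max_pos);       /* 2147483647 */

    /* Leading 1 followed by 31 one-bits. Reinterpreting those 32 bits as a
       signed int (memcpy avoids any implementation-defined conversion) gives
       -1 on a two's-complement machine: the leading 1 marks it as negative. */
    unsigned int all_ones = 0xFFFFFFFFu;
    int as_signed;
    memcpy(&as_signed, &all_ones, sizeof as_signed);
    printf("%d\n", as_signed);     /* -1 */
    return 0;
}
```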
As for what you need to change: your current program actually has undefined behavior. If `0x7FFFFFFF` (2³¹ − 1) is the maximum allowed integer value, then `0x7FFFFFFF + 1` cannot be computed. It is likely to wrap around to -2³¹ (that is, -2147483648), but there's absolutely no guarantee: the standard allows compilers to do anything at all in this case, and real-world compilers do in fact perform optimizations that can give shocking results when you violate this requirement. Similarly, there's no specific guarantee of what `... >> 1` will mean if `...` is negative, though in this case compilers are at least required to choose a specific behavior and document it. (Most compilers choose to produce another negative number by copying the leftmost 1 bit, but there's no guarantee of that.)
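To make those two hazards concrete, here is a minimal sketch; the function and variable names are mine, and the expressions only stand in for the `...` parts of your code:

```c
#include <limits.h>
#include <stdio.h>

int demo(int x)
{
    /* Signed overflow is undefined behavior: when x == INT_MAX (0x7FFFFFFF),
       the compiler may assume this addition never happens and optimize on
       that assumption, so anything at all can result. */
    int y = x + 1;

    /* Right-shifting a negative value is implementation-defined: most
       compilers copy the sign bit (an arithmetic shift), but the standard
       only requires them to pick one behavior and document it. */
    int z = y >> 1;

    return z;
}

int main(void)
{
    printf("%d\n", demo(5));   /* well-defined: prints 3 */
    /* demo(INT_MAX) would trigger the undefined addition above. */
    return 0;
}
```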
So really the only sure fix is either:

- to rewrite your code as a whole, using an algorithm that doesn't have these problems; or
- to specifically check for the case that `x` is `0x7FFFFFFF` (returning a hardcoded 32) and the case that `x` is negative (replacing it with `~x`, i.e. `-(x+1)`, and proceeding as usual); a sketch of this option follows below.