I have some code here:
#include <stdio.h>
int main ()
{
    char foo = 0xE7;
    if (foo == 0xE7)
        printf ("true\n");
    else
        printf ("false\n");
    return 0;
}
That prints "false". I'm not too concerned about that, because I can believe that foo holds the bit pattern 0xE7, which is now treated as a signed value (-25), and -25 compares unequal to the literal 0xE7, which is 231 in decimal.
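Just to see what the two sides of that comparison look like after promotion, here is a small probe (a sketch that assumes char is signed on this platform, which the "false" result suggests):

#include <stdio.h>
int main ()
{
    char foo = 0xE7;
    /* printf promotes foo to int - the same integer promotion the == above
       applies - so this shows the two values actually being compared. */
    printf ("foo as an int:  %d\n", foo);    /* prints -25 here */
    printf ("0xE7 as an int: %d\n", 0xE7);   /* prints 231 */
    return 0;
}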
But what about this?
#include <stdio.h>
int main ()
{
    char c = 0xE7;
    if (c == 0xFFFFFFE7)
        printf ("true\n");
    else
        printf ("false\n");
    return 0;
}
That prints "true".
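Out of curiosity, printing both operands widened to long (a sketch assuming char is signed and long is 64 bits, as on this Ubuntu setup) gives:

#include <stdio.h>
int main ()
{
    char c = 0xE7;
    /* Cast both operands to long so one format specifier fits both,
       without guessing what type the literal itself has. */
    printf ("c as a long:          %ld\n", (long)c);            /* prints -25 here */
    printf ("0xFFFFFFE7 as a long: %ld\n", (long)0xFFFFFFE7);   /* prints 4294967271 here */
    return 0;
}

So, viewed as longs, the operands look like -25 and 4294967271, and yet the comparison above prints "true".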
According to the C++ standard:
A hexadecimal integer literal (base sixteen) begins with 0x or 0X and consists of a sequence of hexadecimal digits, which include the decimal digits and the letters a through f and A through F with decimal values ten through fifteen.
And:
The type of an integer literal is the first of the corresponding list in Table 6 in which its value can be represented.
The first item in that list is "int". And since the comparison prints "true", it would appear that 0xFFFFFFE7 == -25.
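As a quick sanity check of that inference (the same comparison I come back to below), comparing the literal directly against -25 on the same setup:

#include <stdio.h>
int main ()
{
    /* No char involved - just the hex literal against -25. */
    printf ("%s\n", 0xFFFFFFE7 == -25 ? "true" : "false");   /* prints "true" here */
    return 0;
}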
However, 0xFFFFFFE7 is another way of writing 4294967271. So let's try:
#include <stdio.h>
int main ()
{
    char c = 0xE7;
    if (c == 4294967271)
        printf ("true\n");
    else
        printf ("false\n");
    return 0;
}
That prints "false". So, 0xFFFFFFE7 is not the same as 4294967271.
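A related probe: the sizes of the two literals (assuming the usual 64-bit Linux model with a 4-byte int and an 8-byte long):

#include <stdio.h>
int main ()
{
    /* sizeof reports the size of whatever type each literal is given. */
    printf ("sizeof 0xFFFFFFE7 = %zu\n", sizeof 0xFFFFFFE7);   /* prints 4 here */
    printf ("sizeof 4294967271 = %zu\n", sizeof 4294967271);   /* prints 8 here */
    return 0;
}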
Going back to the Standard, what do the words "in which its value can be represented" really mean? Clearly you can stuff the bit pattern 0xFFFFFFE7 into a 4-byte signed int, but that is not really "representing the value" 4294967271.
However:
if (0xFFFFFFE7 == 4294967271) // --> prints "true"
Also:
if (-25 == 0xFFFFFFE7) // --> prints "true"
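Wrapped in a complete program for reference (same gcc setup), those two checks are:

#include <stdio.h>
int main ()
{
    /* The two one-line checks above, in compilable form. */
    printf ("%s\n", 0xFFFFFFE7 == 4294967271 ? "true" : "false");   /* prints "true" here */
    printf ("%s\n", -25 == 0xFFFFFFE7 ? "true" : "false");          /* prints "true" here */
    return 0;
}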
So it appears that "its value can be represented" means "at the binary level", not "treating the hex constant as its equivalent decimal number". Does this sound right?
Tested on gcc 4.8.2, Ubuntu 14.04, 64-bit processor.