Let's take char and unsigned char: the range for a signed char is -128 to 127 and for an unsigned char is 0 to 255, but in fact their hexadecimal representations are both in the range 0x00 to 0xff.
This statement is confusing and misleading. 0xFF is just another way to write 255. You could just as well have said 'In hexadecimal the range for a signed char is -0x80 to 0x7F and for an unsigned char is 0x00 to 0xFF.'
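A minimal sketch, just to show that hexadecimal is only a different notation for the same value (nothing here is specific to the question):

#include <stdio.h>

int main(void)
{
    /* 0xFF and 255 are the same integer constant written two different ways */
    printf("%d\n", 0xFF);   /* prints 255 */
    printf("%X\n", 255);    /* prints FF  */
    return 0;
}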
My question is now: if a char and an unsigned char are stored in memory as an 8-bit binary number, how does the computer itself know whether it is signed or unsigned?
The computer doesn't know. You tell it whether you want to interpret that memory as a signed number or an unsigned number by typing the word unsigned.
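Here is a sketch of that idea (assuming 8-bit chars and two's complement, as in the rest of this answer): the same bit pattern, 0xFF, is read back as two different values depending only on the declared type.

#include <stdio.h>

int main(void)
{
    signed char   s = -1;    /* stored as the bit pattern 0xFF (two's complement) */
    unsigned char u = 0xFF;  /* the very same bit pattern                         */

    /* Both are promoted to int for printf, but the promotion follows the
       declared type: s is sign-extended, u is zero-extended.              */
    printf("%d\n", s);   /* prints -1  */
    printf("%d\n", u);   /* prints 255 */
    return 0;
}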
In the example above, how does printf know whether the value 0xff is signed or unsigned?
Leave printf out of it. Let's make a simpler example:
char a = 128;
What happens? 128 is one larger than the largest possible signed char (again, assuming 8-bit chars in two's complement, and that plain char is signed). So the value wraps around to the smallest possible value; this becomes -128.
char a = 129;
What happens? 129 is larger than the largest possible signed char by two. So it wraps around to the second smallest possible value, -127.
char a = 130;
This is three larger than the largest possible value, so it wraps around to the third smallest possible value, -126.
.... skip a few ...
char a = 255;
This is 128 larger than the largest possible value, so it wraps around to the 128th smallest possible value, which is -1.
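If you want to check that sequence yourself, here is a rough sketch. Note that in C, converting an out-of-range value to a signed char is implementation-defined, so the wraparound described above is what typical two's-complement implementations do, not something the language guarantees (this also assumes plain char is signed):

#include <stdio.h>

int main(void)
{
    /* assuming plain char is signed, 8 bits, two's complement */
    char w = 128;   /* typically wraps to -128 */
    char x = 129;   /* typically wraps to -127 */
    char y = 130;   /* typically wraps to -126 */
    char z = 255;   /* typically wraps to   -1 */

    printf("%d %d %d %d\n", w, x, y, z);   /* prints: -128 -127 -126 -1 */
    return 0;
}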
Got it?
OK, now that we understand that:
char a = 255;
unsigned char b = 255;
Now what happens when we say
int c = a;
int d = b;
? We are assigning to a signed int in both cases. We have already determined that a wrapped around to -1, which is in the range of an int, so c becomes the integer -1. b is the unsigned char 255, which is also in the range of an int, so d becomes the integer 255.
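Put together as a complete program (same assumptions as above: plain char is signed, 8 bits, two's complement), a sketch of that example looks like this:

#include <stdio.h>

int main(void)
{
    char a = 255;            /* typically wraps around to -1 */
    unsigned char b = 255;   /* stays 255                    */

    int c = a;   /* -1 is in the range of an int, so c is -1   */
    int d = b;   /* 255 is in the range of an int, so d is 255 */

    printf("c = %d, d = %d\n", c, d);   /* prints: c = -1, d = 255 */
    return 0;
}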
The fact that the in-memory contents of a and b are the same is irrelevant. That memory is interpreted as a number based on the type you declared for a and b. In particular, the conversion of that bit pattern to an integer bit pattern is entirely dependent on the type.
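You can even look at the resulting integer bit patterns. This sketch assumes a 32-bit int; the cast to unsigned int is only there so the bits can be printed with %X:

#include <stdio.h>

int main(void)
{
    char a = 255;            /* bit pattern 0xFF, typically the value -1 */
    unsigned char b = 255;   /* bit pattern 0xFF, the value 255          */

    int c = a;   /* sign-extended */
    int d = b;   /* zero-extended */

    printf("%08X\n", (unsigned int)c);   /* prints FFFFFFFF */
    printf("%08X\n", (unsigned int)d);   /* prints 000000FF */
    return 0;
}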