I'm wondering about this: when I assign an integer value to an int
variable (16-bit compiler, 2 bytes per int), let's say:
int a;
a=40000;
Since that value can't be represented in the range of the type, it will be truncated. But what I'm seeing is that the resulting value in a is the bit pattern for -25000 (or some number close to that), which means the binary representation the compiler chose for the decimal literal 40000 was an unsigned integer representation. And that raises my question: how does the compiler choose the type of this literal expression?
I'm guessing it uses the smallest type capable of holding the value.
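
For reference, here is a minimal sketch that reproduces what I'm describing. Since most modern compilers use a 32-bit int, I'm using int16_t as a stand-in for the 2-byte int of my 16-bit compiler (that substitution is my assumption, not what my actual compiler does):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* int16_t stands in for the 2-byte int of a 16-bit compiler (assumption). */
    int16_t a;
    a = 40000;               /* 40000 doesn't fit in a signed 16-bit type */
    printf("%d\n", (int)a);  /* on a two's-complement machine this prints -25536 */
    return 0;
}

The -25536 it prints on my machine is the kind of "close to -25000" value I mean above.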