#include <stdio.h>

int main(int argc, char const *argv[])
{
    int x = 128;
    char y = x;        /* int converted to char */
    int z = y;         /* char converted back to int */
    printf("%d\n", z); /* prints -128 */
    return 0;
}

I don't understand why this program prints -128. I have tried converting 128 to binary, but I'm still confused about how the C compiler converts int to char and char to int.

Note: on my machine, sizeof(char) == 1 and sizeof(int) == 4.

Zeuzif

2 Answers


The C standard does not specify whether plain char is signed or unsigned; that choice is implementation-defined. Moreover, when a value that cannot be represented is converted to a signed integer type, the result is also implementation-defined (or an implementation-defined signal is raised), so there is no portable guarantee about the output. You can use the macros in <limits.h> to check what your implementation does.

I suspect that on your system char is signed, which makes its maximum value 127, so 128 is out of range. In this case, it looks like the value wraps around.
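For example, here is a minimal sketch that checks the signedness of char using the standard CHAR_MIN and CHAR_MAX macros from <limits.h>:

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* CHAR_MIN is 0 when plain char is unsigned, negative when it is signed */
    if (CHAR_MIN < 0)
        printf("char is signed, range %d..%d\n", CHAR_MIN, CHAR_MAX);
    else
        printf("char is unsigned, range %d..%d\n", CHAR_MIN, CHAR_MAX);
    return 0;
}

On a system where char is a signed 8-bit type, this prints a range of -128..127.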


Assuming char is signed and 8 bits wide, its range is -128 to 127, which means the value 128 is out of range for a char. So the value is converted in an implementation-defined manner. In the case of gcc, the result is reduced modulo 2^8 (256) until it is in range.

What this effectively means is that the low-order byte of the int value 128 is assigned to the char variable. 128 as a 32-bit value in hex is 0x00000080, so 0x80 is assigned to y. Assuming two's complement representation, that bit pattern represents the value -128. When this value is then assigned to z, which is an int, -128 can be represented in an int, so that is what z holds; its 32-bit representation is 0xFFFFFF80 (the sign bit is extended to fill the upper bytes).
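A quick way to see this is to print the bit pattern at each step. This is just a sketch of your program with extra output; the casts to unsigned types are only there so the bytes print cleanly in hex:

#include <stdio.h>

int main(void)
{
    int x = 128;
    char y = x;   /* implementation-defined: typically wraps to -128 */
    int z = y;    /* -128 fits in an int, so the value is preserved */

    printf("x = %d (0x%08x)\n", x, (unsigned int)x);
    printf("y = %d (0x%02x)\n", y, (unsigned char)y);
    printf("z = %d (0x%08x)\n", z, (unsigned int)z);
    return 0;
}

On a system where char is signed, 8 bits, and two's complement, this prints something like:

x = 128 (0x00000080)
y = -128 (0x80)
z = -128 (0xffffff80)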

dbush