It seems there are two things going on that need to be understood:

- `printf()` conversion specifiers
- Integral conversions

And two more things that help make sense of your output:

- Argument promotion for variadic function arguments
- Two's complement representation
First, `printf()` is a variadic function. It doesn't know the types of its arguments (other than the format string), so you have to use conversion specifiers to tell it how to interpret them. Those arguments are subject to the "default argument promotions", which means your 3-bit bit fields are promoted to `int`s.
You are using conversion specifiers (`%d`, `%u`, and `%d`) that do not match the signedness of your data, so you get undefined behavior that depends on how your data is actually represented in memory.
Second, the C11 standard states:

> **6.3.1.3 Signed and unsigned integers**
>
> When a value with integer type is converted to another integer type other than `_Bool`, if the value can be represented by the new type, it is unchanged.
>
> Otherwise, if the new type is unsigned, the value is converted by repeatedly adding or subtracting one more than the maximum value that can be represented in the new type until the value is in the range of the new type.
>
> Otherwise, the new type is signed and the value cannot be represented in it; either the result is implementation-defined or an implementation-defined signal is raised.
(As far as I can tell, the details relevant here have been true at least since C89.)
This tells us a couple of things about your code:

- When you assign `-1` to an `unsigned int`, `UINT_MAX + 1` is added to it, giving `UINT_MAX`, or `4294967295` for 32-bit integers.
- When you try to assign `5` to a 3-bit signed bit field, the result is implementation-defined.
So you've got both undefined and implementation-defined behavior, but we can still try to make sense of your output, just for fun. I'm assuming 32-bit integers and two's complement representation.
Your system represents the `4294967295` stored in `x` as `11111111 11111111 11111111 11111111`. When you told `printf()` that the argument you were passing was signed, those same bits were interpreted as `-1`, which is the output you got.
For `s.c`, the implementation-defined behavior you seem to have gotten is straightforward: the three bits `101` representing `5` got stored as-is. That means that with the correct conversion specifier, `printf()` should show `s.c` as `-3`.
Here are the values you've assigned:

```
s.i = 101
s.c = 101
x   = 11111111 11111111 11111111 11111111
```

The 3-bit values are promoted to 32 bits by left-padding with `0` for the unsigned value and repeating the sign bit for the signed value:

```
s.i = 00000000 00000000 00000000 00000101
s.c = 11111111 11111111 11111111 11111101
x   = 11111111 11111111 11111111 11111111
```

Which, when interpreted as signed, unsigned, and signed integers respectively, gives:

```
s.i = 5
s.c = 4294967293
x   = -1
```
The `x=-1` output suggests that you are in fact using a two's complement representation (which was a pretty safe bet, anyway), and the output for `s.c` suggests that your `int`s are 32 bits wide.