Someone drew my attention to the following program:
#include <stdio.h>

struct X50 {
  long long int z:50;
} s50 = { 2 };

struct X10 {
  long long int z:10;
} s10 = { 2 };

int main() {
  printf("%zu %zu %zu\n", sizeof(long long int), sizeof(s10.z+1), sizeof(s50.z+1));
}
The type of the expression lv.z+1 inside sizeof is computed according to the "usual arithmetic conversions", which essentially say that the addition is done in the type of the lvalue lv.z, provided that type is at least as wide as int (narrower types are promoted to int first).
I did not expect this type to depend on the width of the bit-field, but it does: both GCC and Clang print 8 4 8 on my computer.
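As a sanity check, the promoted types can be inspected directly with C11's _Generic (a step outside C99, but convenient for confirming what the sizeof results suggest); the TYPE_NAME macro below is my own, not anything from the standard:

#include <stdio.h>

struct X50 { long long int z:50; } s50 = { 2 };
struct X10 { long long int z:10; } s10 = { 2 };

#define TYPE_NAME(e) _Generic((e),     \
    int: "int",                        \
    long long int: "long long int",    \
    default: "something else")

int main(void) {
  /* With GCC and Clang this prints "int" then "long long int",
     matching the 4 and 8 from the program above. */
  printf("%s\n", TYPE_NAME(s10.z + 1));
  printf("%s\n", TYPE_NAME(s50.z + 1));
}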
The relevant clause that I found in the C99 standard is clause 2 in 6.3.1.1, which does not seem to say anything about bit-fields not based on _Bool, int, signed int, or unsigned int. The second part of the clause, "If an int can represent all values of the original type, the value is converted to an int, ...", only seems to apply under the conditions described in the first part of the clause, which do not include bit-fields based on long long int.
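For contrast, a bit-field based on int is squarely covered by that clause: int can represent all the values of a 10-bit field, so the field promotes to int. A minimal sketch (the struct name Y10 is made up for illustration):

#include <stdio.h>

/* A 10-bit field based on int: 6.3.1.1 clearly applies, the field
   promotes to int, and the addition is done in int. */
struct Y10 { int z:10; } y10 = { 2 };

int main(void) {
  printf("%zu\n", sizeof(y10.z + 1)); /* prints 4 where sizeof(int) == 4 */
}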
Besides, 6.7.2.1 says:
A bit-field shall have a type that is a qualified or unqualified version of _Bool, signed int, unsigned int, or some other implementation-defined type.
Is it the case that, since long long int bit-fields are outside the scope of the standard, compilers can invent their own rules, or can some kind of justification for Clang and GCC's behaviors be found elsewhere in C99?
I found this question on StackOverflow, which points in the "compilers can invent their own rules" direction, but there could still be a justification that I missed for Clang and GCC both typing s10.z as int.