Compilers generally pack bitfields together in a single word, reducing the overall size of your struct. The packing comes at the expense of slower access to the bitfield members. For example:
struct Bitfields
{
    unsigned int eight_bit : 8;
    unsigned int sixteen_bit : 16;
    unsigned int eight_bit_2 : 8;
};
might be packed as:

0           8                         24
-----------------------------------------------------
| eight_bit |       sixteen_bit       | eight_bit_2 |
-----------------------------------------------------
Each time you access sixteen_bit, it incurs a shift and a bitwise AND operation.
On the other hand, if you do
struct NonBitfields
{
    uint8_t eight_bit;
    uint16_t sixteen_bit;
    uint8_t eight_bit_2;
};
then the compiler generally aligns the members at word boundaries and lays it out as something like:
0            8            16                       24
-----------------------------------------------------
| eight_bit  |            |       sixteen_bit       |
-----------------------------------------------------
| eight_bit_2|                                      |
-----------------------------------------------------
This wastes more space compared to bitfields, but the members can be accessed faster without bit-shifting and masking.
Here are some other differences:
- You can't apply sizeof to a bitfield member.
- You can't pass a bitfield member by reference.
In terms of portability, both options should work on any standards-compliant compiler. If you mean binary portability between different platforms when writing the struct out to a file or socket, then all bets are off for either case.
In terms of preference, I would opt for uint16_t instead of bitfields, unless there's a good reason to pack the fields together to save space. If I have many bools inside a struct, I'll generally use bitfields to compress those boolean flags into the same word.