I'm learning about bit flags and creating bit fields manually with bitwise operators. I then came across std::bitset, which seems like an easier and cleaner way to store a field of bits. I understand the value of bit fields for minimizing memory usage, but after checking sizeof on a bitset I'm having a hard time seeing how it is the better approach.
Consider:
#include <bitset>
#include <iostream>

int main()
{
    // Using std::bitset; sizeof reports 8 bytes
    const unsigned int i1 = 0;
    const unsigned int i2 = 1;
    std::bitset<8> mySet(0);
    mySet.set(i1);
    mySet.set(i2);
    std::cout << sizeof(mySet) << std::endl;

    // Manually managing bit flags; sizeof reports 1 byte
    const unsigned char t1 = 1 << 0;
    const unsigned char t2 = 1 << 1;
    unsigned char bitField = 0;
    bitField |= t1 | t2;
    std::cout << sizeof(bitField) << std::endl;

    return 0;
}
The output is 8 followed by 1: mySet occupies 8 bytes, while bitField occupies only 1 byte.
Should I not use std::bitset if minimal memory usage is desired?
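For what it's worth, here is a small sketch I used to compare a few widths. The exact numbers are implementation-defined (on my 64-bit build the bitset sizes appear to round up to multiples of 8 bytes), so this only reports what the current standard library does:

#include <bitset>
#include <iostream>

int main()
{
    // sizeof(std::bitset<N>) is implementation-defined; these lines just
    // report what this particular implementation uses for a few widths.
    std::cout << "bitset<1>:   " << sizeof(std::bitset<1>)   << " bytes\n";
    std::cout << "bitset<8>:   " << sizeof(std::bitset<8>)   << " bytes\n";
    std::cout << "bitset<64>:  " << sizeof(std::bitset<64>)  << " bytes\n";
    std::cout << "bitset<128>: " << sizeof(std::bitset<128>) << " bytes\n";

    // The manual flag approach uses exactly one byte.
    unsigned char bitField = 0;
    std::cout << "unsigned char bit field: " << sizeof(bitField) << " byte\n";
    return 0;
}

The numbers will vary between standard libraries, but they illustrate the per-object overhead I'm asking about.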