The Standard allows one to choose between an integer type, an enum, and a std::bitset. Given these choices, why would a library implementor use one over the others?
Case in point, LLVM's libc++ appears to use a combination of (at least) two of these implementation options:

- ctype_base::mask is implemented using an integer type: <__locale>
- regex_constants::syntax_option_type is implemented using an enum + overloaded operators: <regex> (a sketch of this technique follows below)
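For illustration, here is a minimal sketch of the enum + overloaded operators technique. The flag names are made up for this example; this is not libc++'s actual code:

```cpp
// Hypothetical bitmask type implemented as an enum plus overloaded
// operators (illustrative names only).
enum syntax_flags {
    icase    = 1 << 0,
    nosubs   = 1 << 1,
    optimize = 1 << 2
};

// Combining two flags yields a value outside the enumerator list, so the
// operators have to cast through int and back to the enum type.
inline syntax_flags operator|(syntax_flags a, syntax_flags b) {
    return static_cast<syntax_flags>(static_cast<int>(a) | static_cast<int>(b));
}

inline syntax_flags operator&(syntax_flags a, syntax_flags b) {
    return static_cast<syntax_flags>(static_cast<int>(a) & static_cast<int>(b));
}
```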
The GCC project's libstdc++ uses all three:

- ios_base::fmtflags is implemented using an enum + overloaded operators: <bits/ios_base.h>
- regex_constants::syntax_option_type is implemented using an integer type, and regex_constants::match_flag_type is implemented using a std::bitset; both: <bits/regex_constants.h> (a sketch of the integer-type approach follows below)
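By contrast, the integer-type variant needs no operator overloads at all, at the cost of admitting any integer value. Again a hedged sketch with made-up names, not libstdc++'s actual code:

```cpp
#include <cstdint>

// Hypothetical bitmask type implemented as a plain integer typedef.
typedef std::uint32_t match_flags;

const match_flags match_not_bol = 1u << 0;
const match_flags match_not_eol = 1u << 1;

// The built-in | and & already work on the typedef'd type, but so does
// any other arithmetic, and any integer converts to match_flags, which
// makes this the least type-safe of the three options.
const match_flags both_flags = match_not_bol | match_not_eol;
```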
AFAIK, gdb cannot "detect" the bitfieldness of any of these three choices, so there would not be a difference with respect to enhanced debugging.
The enum solution and the integer type solution should always occupy the same space. std::bitset does not seem to guarantee that sizeof(std::bitset<32>) == sizeof(std::uint32_t), so I don't see what is particularly appealing about std::bitset.
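A quick check makes the point concrete; the two sizes happen to match on common implementations, but nothing in the Standard requires it:

```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    // Implementation-defined: an implementation is free to make
    // std::bitset<32> larger than 4 bytes (padding, word-sized storage).
    std::cout << sizeof(std::bitset<32>) << ' '
              << sizeof(std::uint32_t) << '\n';
}
```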
The enum solution seems slightly less type safe because a combination of masks does not correspond to any enumerator.
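To make that concrete with the syntax_flags sketch from earlier: icase | nosubs produces the value 3, which no enumerator names, so nothing downstream (a switch, a debugger) can report it symbolically:

```cpp
#include <iostream>

// Assumes the syntax_flags sketch shown above.
int main() {
    syntax_flags f = icase | nosubs;   // value 3: no enumerator names it
    switch (f) {
    case icase:  std::cout << "icase\n";  break;
    case nosubs: std::cout << "nosubs\n"; break;
    default:     std::cout << "unnamed combination: "
                           << static_cast<int>(f) << '\n';
    }
}
```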
Strictly speaking, the aforementioned is with respect to n3376 and not FDIS (as I do not have access to FDIS).
Any available enlightenment in this area would be appreciated.