According to the standard, whether `char` is signed or unsigned is implementation-defined. This has caused me some trouble. Here are two examples:
1) Testing the most significant bit. If `char` is signed, I can simply compare the value against `0`. If it is unsigned, I compare the value against `128` instead. Neither simple test is generic enough to cover both cases. To write portable code, it seems I have to manipulate the bits directly, which is not neat.
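For concreteness, one workaround is to go through `unsigned char`, whose range does not depend on the signedness of plain `char`. A minimal sketch (the `msb_set` name is just for illustration):

```cpp
#include <iostream>

// Test the most significant bit by converting through unsigned char,
// whose range is the same whether plain char is signed or not.
bool msb_set(char c) {
    return static_cast<unsigned char>(c) >= 0x80;
}

int main() {
    char c = 'A';  // 0x41 in ASCII, so the MSB is clear
    std::cout << std::boolalpha << msb_set(c) << '\n';  // prints false
}
```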
2) Value assignment. Sometimes I need to write a bit pattern to a `char` value. If `char` is unsigned, this is easy with hexadecimal notation, e.g., `char c = 0xff;`. But this method does not apply when `char` is signed. Take `char c = 0xff;` for example: `0xff` is beyond the maximum value a signed `char` can hold, and in such cases the standard says the resulting value of `c` is implementation-defined.
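To make the problem concrete, here is a small sketch (assuming an 8-bit `char`; the `memcpy`-based workaround is just one possibility, and the variable names are mine):

```cpp
#include <cstring>

int main() {
    // Implementation-defined result when char is signed, because
    // 255 exceeds the maximum of an 8-bit signed char:
    char c1 = 0xff;

    // One workaround: store the pattern in an unsigned char, whose
    // range always covers it, then copy the raw byte over.
    unsigned char u = 0xff;
    char c2;
    std::memcpy(&c2, &u, 1);  // c2 now holds the bit pattern 0xff

    (void)c1;  // silence unused-variable warnings
    (void)c2;
}
```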
So, does anybody have good ideas about these two issues? With respect to the second one, I'm wondering whether `char c = '\xff';` is OK for both signed and unsigned `char`.
NOTE: It is sometimes necessary to write explicit bit patterns to characters. See the example at http://en.cppreference.com/w/cpp/string/multibyte/mbsrtowcs.