No, I misunderstood the change; the "change of bounds" did not actually occur. In fact, the only change was that C++14 now requires bytes to hold 256 distinct values, which was already within the capabilities of the typical byte representation beforehand.
C++ does not require a specific binary representation for its integral types, which means implementations remain conformant to the C++ Standard even if they use a representation other than two's complement (e.g. ones' complement or sign-magnitude) for integral types.
From Section 3.9.1 of the C++14 Standard Working Draft, Fundamental Types:
> 7 Types `bool`, `char`, `char16_t`, `char32_t`, `wchar_t`, and the signed and unsigned integer types are collectively called *integral types*. A synonym for integral type is *integer type*. The representations of integral types shall define values by use of a pure binary numeration system. [*Example:* this International Standard permits 2's complement, 1's complement and signed magnitude representations for integral types. *— end example*]
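Both halves of this claim can be checked at compile time on a typical implementation. A minimal sketch, assuming a two's-complement machine (which the Standard does not mandate); the names `char_patterns` and `char_values` are mine, not from the Standard:

```cpp
#include <climits>
#include <limits>

// Number of distinct bit patterns a single char object can hold: 2^CHAR_BIT.
constexpr long char_patterns = 1L << CHAR_BIT;

// C++ guarantees at least 8 bits per byte, hence at least 256 patterns.
static_assert(char_patterns >= 256, "a byte holds at least 256 bit patterns");

// Count of representable signed char values on this implementation.
constexpr long char_values =
    static_cast<long>(std::numeric_limits<signed char>::max()) -
    static_cast<long>(std::numeric_limits<signed char>::min()) + 1;

// On two's complement (assumed here; the Standard also permits ones'
// complement and sign-magnitude) every bit pattern is a distinct value,
// so signed char spans -128..127.
static_assert(char_values == char_patterns,
              "holds on two's complement; ones' complement would give one fewer");
```

If either `static_assert` fails, the program simply does not compile, which is the idiomatic way to document such representation assumptions.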
If we calculate the number of values a single char can hold, we can see that C++14 extends the negative range of a char by one, for a total of 256 unique values (including 0). I would guess that prior versions left the bottom limit for a char at -127 for compatibility with ones' complement systems.
Edit: The misunderstanding is that ones' complement can still hold 256 bit patterns; it's just that two of them represent 0, as +0 and -0. If an implementation takes one of those patterns to mean something else, it still has 256 assignable values and thus enough room to satisfy the char requirement of C++14.
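To make the +0/-0 point concrete, here is a sketch that decodes all 256 eight-bit patterns under ones' complement rules and counts the distinct numeric values; `ones_complement_value` and `distinct_ones_complement_values` are hypothetical helpers of my own, not anything in the Standard:

```cpp
#include <set>

// Interpret an 8-bit pattern under ones' complement: a negative value's
// bit pattern is the bitwise NOT of its magnitude, so 0xFF decodes to -0.
int ones_complement_value(unsigned bits) {
    if (bits & 0x80)               // sign bit set: negative number
        return -static_cast<int>(~bits & 0xFF);
    return static_cast<int>(bits); // non-negative values read as-is
}

// Count how many distinct numeric values the 256 patterns map to.
int distinct_ones_complement_values() {
    std::set<int> values;
    for (unsigned bits = 0; bits < 256; ++bits)
        values.insert(ones_complement_value(bits));
    return static_cast<int>(values.size());
}
```

Running this yields 255 distinct values from 256 patterns: 0x00 and 0xFF both decode to 0, which is exactly the collision described above.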
Related: Why not enforce 2's complement in C++?
C++14 Standard Working Draft: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4296.pdf