First, integral types are required to be represented using a pure binary system, and so far the tutorial is correct.
Second, a `short` is required to be at least 16 bits. If it's more, then you won't see the effect that you did, or any effect at all. It's unclear from your description whether the tutorial blindly assumes that a `short` is necessarily 16 bits (wrong), or whether it's just using a concrete example, with the understanding that the result depends on the compiler etc.
Third, the conversion to signed type … is formally Implementation Defined Behavior if the value cannot be represented. This means that you are not guaranteed a change of value. Instead you can, in principle, get any effect, such as a crash.
[Example of other behavior lacking because I'm unable to cajole g++ 4.8.2 into trapping for your example code, even with `-ftrapv`.]
… yields a value that's either the same, if it can be represented, or otherwise defined by the implementation.
That said, C++ guarantees that unsigned arithmetic is performed modulo 2^n, where n is the number of value representation bits, e.g. 16 in your example. And with the very common two's complement representation of signed integers, a negative value −x is represented as the bit pattern for −x + 2^n. So if you start with the latter value (the interpretation of the bit pattern as unsigned) being 50 000, then with 16 value bits and two's complement form you get the signed value 50 000 − 2^16 = 50 000 − 65 536 = −15 536.