I'm currently reading a book on C# programming, and it has a brief intro to overflow and underflow where the writer gives a general idea of what happens when you go over the allowed range of a certain type.
Example
short a = 30000;
short b = 30000;
short sum = (short)(a + b); // Explicitly cast back into short
Console.WriteLine(sum); // This would output the value -5536
So the short type only has a range of -32768 to 32767, and the explanation the writer gives in the book is:
"For integer types (byte, short, int, and long), the most significant bits (which overflowed) get dropped. This is especially strange, as the computer then interprets it as wrapping around. This is why we end up with a negative value in our example. You can easily see this happening if you start with the maximum value for a particular type (for example, short.MaxValue
) and adding one to it." The C# Players Guide Second edition, chapter 9, page 58
You'd end up at the minimum (-32768).
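To see the wraparound the writer describes, I tried a small snippet of my own (this is just my test, not from the book):

short max = short.MaxValue;        // 32767
short wrapped = (short)(max + 1);  // max + 1 is promoted to int, so it has to be cast back to short
Console.WriteLine(wrapped);        // prints -32768, which is short.MinValue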
I'm having a problem understanding this; I get confused when the writer talks about "the computer interprets it as wrapping around".
The way I tried understanding this was that the short type uses 2 bytes (16 bits), so the number 32767 = 0111111111111111. If I add 1 to that binary string I end up with 32768 = 1000000000000000, which cannot be represented with the short type since the max value is 32767, so the result comes out as -32768. Why does it end up as a negative number?
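Here is how I checked the bit patterns myself (again my own check, not from the book):

short x = 32767;
short y = (short)(x + 1);
Console.WriteLine(Convert.ToString(x, 2).PadLeft(16, '0')); // 0111111111111111
Console.WriteLine(Convert.ToString(y, 2).PadLeft(16, '0')); // 1000000000000000
Console.WriteLine(y);                                       // -32768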
I understand the concept of using two's complement to represent negative numbers, but could someone correct my thinking here or elaborate? I don't fully understand why we only use 15 of the 16 bits to represent the positive range and the most significant bit for the negative range.
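My current (possibly wrong) mental arithmetic for two's complement is that the most significant bit counts as -2^15 while the other 15 bits keep their normal positive weights, something like:

// pattern 0111111111111111: sign bit clear, lower 15 bits all set
Console.WriteLine(0 + ((1 << 15) - 1));  // 32767
// pattern 1000000000000000: sign bit set, lower 15 bits all clear
Console.WriteLine(-(1 << 15) + 0);       // -32768

Is that the right way to think about it?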