5

The C99 standard defines the range of data types in the following manner:

— minimum value for an object of type signed char
SCHAR_MIN -127 // −(2^7 − 1)
— maximum value for an object of type signed char
SCHAR_MAX +127 // 2^7 − 1
— maximum value for an object of type unsigned char
UCHAR_MAX 255 // 2^8 − 1
— minimum value for an object of type char
CHAR_MIN see below
— maximum value for an object of type char
CHAR_MAX see below
— maximum number of bytes in a multibyte character, for any supported locale
MB_LEN_MAX 1
— minimum value for an object of type short int
SHRT_MIN -32767 // −(2^15 − 1)
— maximum value for an object of type short int
SHRT_MAX +32767 // 2^15 − 1
— maximum value for an object of type unsigned short int
USHRT_MAX 65535 // 2^16 − 1
— minimum value for an object of type int
INT_MIN -32767 // −(2^15 − 1)
— maximum value for an object of type int
INT_MAX +32767 // 2^15 − 1
— maximum value for an object of type unsigned int
UINT_MAX 65535 // 2^16 − 1
— minimum value for an object of type long int
LONG_MIN -2147483647 // −(2^31 − 1)
— maximum value for an object of type long int
LONG_MAX +2147483647 // 2^31 − 1
— maximum value for an object of type unsigned long int
ULONG_MAX 4294967295 // 2^32 − 1

Looking at the negative range, the minimum could actually be one greater in magnitude than what is defined here under the allowable two's complement representation. Why are the limits defined like this?
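
For reference, here is a minimal sketch (assuming a hosted implementation, not part of the original question) that prints what a given compiler actually provides. The values quoted above are only the minimum magnitudes the standard requires, and a typical two's complement platform exceeds several of them (for example, INT_MIN commonly prints as -2147483648):

```c
/* Print the limits.h values of the current implementation.
 * The standard only guarantees the magnitudes quoted above as minimums;
 * a typical two's complement platform will exceed several of them. */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    printf("SCHAR_MIN = %d, SCHAR_MAX = %d\n", SCHAR_MIN, SCHAR_MAX);
    printf("SHRT_MIN  = %d, SHRT_MAX  = %d\n", SHRT_MIN, SHRT_MAX);
    printf("INT_MIN   = %d, INT_MAX   = %d\n", INT_MIN, INT_MAX);
    printf("LONG_MIN  = %ld, LONG_MAX  = %ld\n", LONG_MIN, LONG_MAX);
    printf("ULONG_MAX = %lu\n", ULONG_MAX);
    return 0;
}
```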

TheCodeArtist
bubble
  • These sizes are defined in the standard? Which section? – Kiril Kirov Oct 18 '12 at 14:55
  • 2
    Two's complement isn't a requirement AFAIK. – Mat Oct 18 '12 at 14:56
  • 2
    It may be to allow for 1s complement as well as 2s complement. – Paul R Oct 18 '12 at 14:56
  • I'm not sure it means the standard, rather the ANSI limits.h – im so confused Oct 18 '12 at 14:57
  • I have copied the ranges from C standard draft. – bubble Oct 18 '12 at 14:58
  • @bubble please post a link to your source. – logoff Oct 18 '12 at 14:58
  • http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf – bubble Oct 18 '12 at 14:59
  • @CashCow Are you sure? The questions are entirely different... – bubble Oct 18 '12 at 15:02
  • I believe all these sizes are implementation dependent? the standard only defines for a minimum size doesn't it? – Jimmy Lu Oct 18 '12 at 15:07
  • Maybe so that, in arithmetic operations, the value that has just the highest bit set acts as a flag for overflow if you are incrementing or decrementing by 1. This value also works well as a pseudo-NaN value for integers. The number itself is probably "well-defined" for bitwise operations but not for arithmetic ones. – CashCow Oct 18 '12 at 15:07
  • @BeyondSora As I read more carefully through the document, these are the minimum values.. – bubble Oct 18 '12 at 15:09
  • 1
    The question [Why does INT_MIN = -INT_MIN in a signed twos complement representation](http://stackoverflow.com/questions/8917233/why-does-int-min-int-min-in-a-signed-twos-complement-representation) is **NOT** a duplicate of this question. The answer there is strictly about 2's complement arithmetic; the answer here is "because there are other systems of binary arithmetic than 2's complement arithmetic:". – Jonathan Leffler Oct 18 '12 at 23:43

2 Answers

10

Looking at the negative range, the minimum could actually be one greater in magnitude than what is defined here under the allowable two's complement representation. Why are the limits defined like this?

Because C is also designed for old (and new!) architectures, which don't necessarily use two's complement representation for signed integers. Three representations are indeed allowed by the C11 standard (which of these applies is implementation-defined):

§ 6.2.6.2 Integer types

If the sign bit is one, the value shall be modified in one of the following ways:

— the corresponding value with sign bit 0 is negated (sign and magnitude)
— the sign bit has the value −(2^M) (two's complement);
— the sign bit has the value −(2^M − 1) (ones' complement).

So, with ones' complement representation, the minimum value is -(2^M - 1). However, there is an exception: the optional C99 exact-width types intxx_t, which are guaranteed to be stored in two's complement representation (and that's why they are optional: the C standard doesn't mandate this representation).
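
As an illustration of the point above (a sketch assuming a hosted C99 implementation, not part of the original answer), the symmetry of the range can be checked with limits.h, and the optional exact-width types can be inspected where they exist:

```c
/* Sketch: inspect what the implementation actually provides.
 * With ones' complement or sign and magnitude the range is symmetric,
 * i.e. INT_MIN == -INT_MAX, matching the magnitudes the standard quotes;
 * two's complement typically adds one extra negative value. */
#include <inttypes.h>   /* also pulls in <stdint.h>; the intN_t types are optional */
#include <limits.h>
#include <stdio.h>

int main(void)
{
    if (INT_MIN + INT_MAX == 0)
        puts("int range is symmetric (ones' complement or sign and magnitude)");
    else
        puts("int has an extra negative value (typical of two's complement)");

#ifdef INT32_MIN
    /* int32_t, when it exists, is guaranteed to be two's complement
     * with no padding bits, so its minimum is exactly -2^31. */
    printf("INT32_MIN = %" PRId32 "\n", INT32_MIN);
#endif
    return 0;
}
```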

md5
4

Because two's complement is not required. It is possible for C99 to be implemented on an architecture that uses sign and magnitude or ones' complement representation.
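
To make the difference concrete, here is a small illustrative sketch (not part of the original answer) showing how a negative value such as -1 would be encoded in 8 bits under each of the three representations the standard permits:

```c
/* Illustrative sketch: encode -magnitude (0 < magnitude <= 127) as an
 * 8-bit pattern under each representation the standard allows. */
#include <stdio.h>

static unsigned sign_magnitude(unsigned magnitude)  { return 0x80u | magnitude; }
static unsigned ones_complement(unsigned magnitude) { return ~magnitude & 0xFFu; }
static unsigned twos_complement(unsigned magnitude) { return (~magnitude + 1u) & 0xFFu; }

int main(void)
{
    /* -1 encodes as 0x81, 0xFE and 0xFF respectively; only two's
     * complement has no second ("negative") zero, which is why it
     * gains the extra value -128 in 8 bits. */
    printf("sign and magnitude: 0x%02X\n", sign_magnitude(1));
    printf("ones' complement  : 0x%02X\n", ones_complement(1));
    printf("two's complement  : 0x%02X\n", twos_complement(1));
    return 0;
}
```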

Art
  • 2
    the C standard only permits two's complement, ones' complement or sign and magnitude as signed integer representation – Christoph Oct 18 '12 at 15:06
  • @Christoph Is that explicitly stated in the standard or only implied by the minimum values in limits.h? I recall last time I dug into this and read the C11 standard like the devil reads the bible we could even find two's complement assumptions (anecdote warning though, I can't remember where and I can't find the discussion I had about it). – Art Oct 18 '12 at 15:14
  • Do you know if any implementation has ever been created for a practical non-two's-complement platform [not counting virtual machines whose primary purpose is to be able to claim that not all C99 implementations use two's-complement representations]? – supercat Dec 28 '16 at 16:29