
Imagine this situation. int32_t is an extended integer type and it's represented in two's complement (as the standard requires int32_t to be represented). This means that INT32_MIN is -2147483648 (0x80000000).

Meanwhile int is a standard integer type and it's represented in one's complement (as the standard allows). This means that INT_MIN is -2147483647.
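To make the hypothetical concrete, here is a small sketch you can compile today. On a typical two's complement implementation both macros print the same value; the comments describe what the hypothetical implementation above would produce instead:

```c
#include <stdio.h>
#include <limits.h>    /* INT_MIN */
#include <inttypes.h>  /* INT32_MIN, PRId32 (includes <stdint.h>) */

int main(void)
{
    /* On the hypothetical implementation described above:
     *   INT32_MIN = -2147483648  (32-bit two's complement)
     *   INT_MIN   = -2147483647  (32-bit one's complement)
     * so int cannot represent INT32_MIN. On a typical two's
     * complement machine both lines print -2147483648. */
    printf("INT32_MIN = %" PRId32 "\n", INT32_MIN);
    printf("INT_MIN   = %d\n", INT_MIN);
    return 0;
}
```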

Now correct me if I'm wrong, but I think both types have the same width, which means, according to 6.3.1.1.1 (emphasis mine):

The rank of any standard integer type shall be greater than the rank of any extended integer type with the same width.

So the rank of int32_t is lower than that of int.

Now 6.3.1.8 (usual arithmetic conversions) says (emphasis mine):

<...> Otherwise, the integer promotions are performed on both operands. Then the following rules are applied to the promoted operands: If both operands have the same type, then no further conversion is needed. Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank is converted to the type of the operand with greater rank.

So if I understand it correctly, in this code block:

```c
int32_t x = INT32_MIN;
int y = 1;
x + y; // What happens here?
```

In the expression x + y, x has to be promoted to int, and INT32_MIN is outside of the range of int.
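For contrast, here is a runnable version of that expression. What it prints assumes an ordinary two's complement implementation where int32_t is a typedef for int, so the conversion is lossless and the question's problem never arises:

```c
#include <stdio.h>
#include <inttypes.h>  /* int32_t, INT32_MIN, PRId32 */

int main(void)
{
    int32_t x = INT32_MIN;
    int y = 1;
    /* Usual arithmetic conversions: the operand of lesser rank is
     * converted to the type of greater rank. On a common two's
     * complement implementation nothing is lost and this prints
     * -2147483647. On the hypothetical one's complement int,
     * converting x to int could not preserve INT32_MIN. */
    printf("%" PRId32 "\n", (int32_t)(x + y));
    return 0;
}
```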

Is this a bug in the standard or am I missing something?

In other words, what does the expression x + y in this context evaluate to, as defined by the standard?

Paul Ogilvie
MarkWeston
  • `INT32_MIN` should be equal to -(2^31). It's rather bizarre for it to be odd. – cHao Dec 28 '17 at 17:21
  • for two's complement, `INT32_MIN` is `(-INT32_MAX) - 1` – MarkWeston Dec 28 '17 at 17:24
  • @cHao: Both `MIN` values are one lower (further from zero) than they should be. One's complement would have odd minimums and maximums; the min would be `-(2^31) + 1`. – ShadowRanger Dec 28 '17 at 17:25
  • @MarkWeston: Correct. And the max is an odd number, `(2^31) - 1`, so the min is `-(2^31 - 1) - 1`, or `-(2^31)`. – ShadowRanger Dec 28 '17 at 17:26
  • @ShadowRanger My bad, corrected it. The point still stands, `int` can't represent `INT32_MIN` – MarkWeston Dec 28 '17 at 17:27
  • @MarkWeston INT32_MAX will have the value int can store. There is no way in one implementation for `int32_t` to be a different complement than `int`. – 0___________ Dec 28 '17 at 17:34
  • @PeterJ_01 "There is no way in one implementation for `int32_t` to be a different complement than `int`". Please provide a standard quote that explicitly (or implicitly) prohibits it. – MarkWeston Dec 28 '17 at 17:36
  • `extended integer types` are implementation defined. So these hypothetical considerations are pointless, as the behaviour of those types is defined by the implementation. There is nothing like "the size of int" as well. – 0___________ Dec 28 '17 at 17:39
  • @MarkWeston: The standard doesn't specify the size of `int` (other than requiring it at least be able to represent integers from -32767 to 32767). It could be 16 bits, or 32, or 64, or even something like 18 or 36. There's nothing requiring that `int32_t` and `int` be the same underlying type or have the same size. That's one of the biggest reasons the exact-width integer types even exist. (The other reason is to be able to specify two's complement.) – cHao Dec 28 '17 at 17:39
  • @cHao I am well aware of that. However the standard **does** specify the limits (in `<limits.h>` and `<stdint.h>`) of integer types. – MarkWeston Dec 28 '17 at 17:43
  • @MarkWeston: It specifies minimum ranges. It does not specify the actual numbers. `INT_MIN` can be -(2^31) if the implementation chooses to extend the range. Only the exact-width types have their minimums and maximums defined, mostly because those are the only ones where the size and representation are fully specified (and thus the only ones where the limits are etched in stone). – cHao Dec 28 '17 at 17:45
  • Extended integers are implementation defined and only have to be binary numbers, and signed integers are required to be represented in one's complement, two's complement, or sign and magnitude notation. Your assumption in the first sentence is just wrong. I can define in my implementation an INT54_BIGENDIAN integer if I need one. – 0___________ Dec 28 '17 at 17:46
  • If you convert a signed int type to another which cannot accommodate it, it is UB. – 0___________ Dec 28 '17 at 17:48
  • @PeterJ_01 If `int32_t` is `typedef`ed to that extended type it has to be a two's complement 32-bit integer (in other words, all the values that type can be capable of representing would be in range [-2147483648;2147483647]). – MarkWeston Dec 28 '17 at 17:49
  • @PeterJ_01 *Extended integers are implementation defined and only have to be binary numbers* No. The `int32_t` in this question is a *fixed-width integer type*. Per 7.20.1.1: "The typedef name int*N*_t designates a signed integer type with width `N`, no padding bits, and a two's complement representation." – Andrew Henle Dec 28 '17 at 17:50
  • int32_t in this question is a fixed-width integer type, so it has to have the same representation as int. Because these fixed-width integers are defined as two's complement, int has to be two's complement as well. Otherwise the particular implementation cannot define the fixed-width integer types, and fixed-width integer values have to be implemented as extended integer types. – 0___________ Dec 28 '17 at 20:02
  • "Is this a bug in the standard" It is a bug in the premise " int is 32-bit one's complement standard integer type". Let `int` be a 64-bit one's complement standard integer type and give your hypothetical machine wings to fly. – chux - Reinstate Monica Dec 28 '17 at 22:16
  • `int32_t` can't be an extended integer type when `long long` must have at least 64 bits – phuclv Dec 31 '17 at 14:20

2 Answers


int32_t is optional. A conforming implementation cannot have a 32-bit one's complement int and a 32-bit two's complement extended integer type int32_t; if int is one's complement, int32_t would most likely simply not be provided.

Here's one reason a 32-bit one's complement int and a 32-bit two's complement extended integer type int32_t can't coexist. Quoting the N1570 draft:

7.20.2 Limits of specified-width integer types

1 The following object-like macros specify the minimum and maximum limits of the types declared in <stdint.h>. Each macro name corresponds to a similar type name in 7.20.1.

2 Each instance of any defined macro shall be replaced by a constant expression suitable for use in #if preprocessing directives, and this expression shall have the same type as would an expression that is an object of the corresponding type converted according to the integer promotions. Its implementation-defined value shall be equal to or greater in magnitude (absolute value) than the corresponding value given below, with the same sign, except where stated to be exactly the given value.

...

INT*N*_MIN                                  exactly -(2^(N-1))

In the situation you describe, INT32_MIN must have value exactly -2^31, but due to the integer promotions, it must have a type that cannot hold that value. This contradiction prevents providing int32_t at all.
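As an illustration of why no definition can satisfy that requirement, here is the idiom a typical two's complement implementation uses (a sketch only; actual `<stdint.h>` contents vary by implementation):

```c
/* Typical definition on a two's complement implementation where
 * int32_t has the range of int (illustrative, not from any one
 * particular <stdint.h>): */
#define INT32_MIN  (-2147483647 - 1)

/* This works because -2147483647 fits in int and subtracting 1 stays
 * in range. On the hypothetical implementation, 7.20.2p2 requires the
 * macro to have the promoted type of int32_t -- int or unsigned int --
 * with the exact value -2147483648. A one's complement int stops at
 * -2147483647 and unsigned int has no negative values, so no
 * conforming expression exists, and int32_t cannot be provided. */
```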

user2357112

Meanwhile int is a standard integer type and it's represented in one's complement

Extended integers are implementation defined and only have to be binary numbers, and signed integers are required to be represented in one's complement, two's complement, or sign and magnitude notation.

intxx_t is a fixed-size int type, so it has to have the same representation as int. Because intxx_t is two's complement, it requires int to be the same.

0___________
  • The machines are irrelevant. It's the language specification that matters. – MarkWeston Dec 28 '17 at 17:23
  • And the specification does not require one's complement. In fact, it doesn't require any particular representation; most computers these days use two's complement. – cHao Dec 28 '17 at 17:24
  • @cHao. It **allows** `int` to be represented in one's complement. So `int` **may be** represented in one's complement. – MarkWeston Dec 28 '17 at 17:25
  • It allows the _implementation_ to use one's complement, but `int` is supposed to be the most efficient integer type. So if the machine uses two's complement natively, that's what the implementation is generally going to use as well. – cHao Dec 28 '17 at 17:29
  • @MarkWeston INT32_MAX is not a virtual one; it is strictly connected with the int representation. – 0___________ Dec 28 '17 at 17:30
  • What is the basis for your assertion that `intxx_t` has to have the same representation as `int`? More generally, do you claim that all integer types have to use the same representation (2's-complement, 1's-complement, sign-and-magnitude)? I'm fairly sure there's no such requirement, and that a conforming implementation could make `int` 1's-complement and `int32_t` a typedef for a 2's-complement extended integer type. Prove me wrong by citing [N1570](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf). (Nobody is claiming that this is likely, merely that it's permitted.) – Keith Thompson Dec 28 '17 at 19:55
  • Logic. As I understand the standard, all int types have to have the same representation. If intxx_t is two's complement, int has to be as well. If int is not two's complement, implementations cannot define the intxx_t types and the OP's considerations are pointless. I do not say that the implementation cannot implement extended int types which may have fixed length. But those types will not have max and min values defined by standard definitions. – 0___________ Dec 28 '17 at 19:59