The C standard specifies that integer operands smaller than `int` will be promoted to `int` before any arithmetic operations are performed on them. As a consequence, operations on two unsigned values which are smaller than `int` will be performed with signed rather than unsigned math. In cases where it is important to ensure that operations on 32-bit operands will be performed using unsigned math (e.g. multiplying two numbers whose product could exceed 2⁶³), will use of the type `uint_fast32_t` be guaranteed by any standard to yield unsigned semantics without any Undefined Behavior? If not, is there any other unsigned type which is guaranteed to be at least 32 bits and at least as large as `int`?

- According to [this documentation](http://en.cppreference.com/w/cpp/types/integer), `uint_fast32_t` is an "unsigned integer type with width of exactly 32 bits". I think your only worry is the case in which `int` has more than 32 bits, which seems rather unlikely (according to that page, the ILP64 data model "only appeared in some early 64-bit Unix systems"). You could use `uint_fast64_t` if you're _really_ worried, though. – cf- Mar 14 '14 at 03:39
- Beyond that, `uint_xxx` isn't actually a primitive; it's an implementation-specific typedef. You should be able to easily look at the header where it is defined and derive what the actual backing data structure is. For example, on a machine with a 16-bit `int` it would likely be `unsigned long`. – aruisdante Mar 14 '14 at 03:42
- @aruisdante: Such types may or may not map to primitives which are unique to them, though unfortunately, the way the standard is written, a compiler's ability to take advantage of unique primitives for them would be limited [e.g. even if allowing a type like `uint_fast8_t` to behave as a `uint32_t` which magically fit in a single byte, but whose upper bits could be cleared at arbitrary times, would improve efficiency on machines where loads and stores were the only 8-bit operations]. – supercat Mar 14 '14 at 17:39
- @computerfreaker You are confusing the documentation of `uint_fast32_t` with that of `uint32_t`. See C11 7.20.1.3 “Fastest minimum-width integer types” (and I know that the question is tagged C99, but I resolved to switch to C11 on January 1, 2014. Which is still more appropriate than C++ documentation). – Pascal Cuoq Mar 18 '14 at 20:59
2 Answers
No, it's not. In any case, I would advise against using the `[u]int_fastN_t` types at all. On real-world systems they're misdefined; for example, `uint_fast32_t` is usually defined as a 64-bit type on x86_64, despite 64-bit operations being at best (addition, subtraction, logical ops) identical in speed to 32-bit ones and at worst much slower (division, and loads/stores, since you use twice as many cache lines).

- Wow, I wonder what e.g. glibc's rationale for this decision is. They're fine (i.e. have their minimal number of bits, except `short`) on e.g. MSVS 2013. – rubenvb Mar 14 '14 at 15:44
- In mingw64, `int_fast64_t` is defined as `long long`, and the remaining types are also defined as exactly their normal sizes (`int_fast8_t` as `signed char`, `int_fast16_t` as `short` and `int_fast32_t` as `int`). Actually it's hard to define what is "fast", since most programs don't need divisions but do need fast additions and subtractions... yet some need fast divisions, and no single type can fit all. Moreover, using 64-bit operations on x86_64 requires the REX prefix, which increases code size. – phuclv Sep 08 '14 at 10:34
- @LưuVĩnhPhúc: If there are conflicting goals, it's hard to define. But in reality, for the example I gave, 32-bit is *at least as fast* for addition, subtraction, etc. as 64-bit, and *significantly faster* for division. And there's also the memory/cache performance issue. There is simply no conceivable way that a 64-bit type could be the right definition for "fast" in this case. – R.. GitHub STOP HELPING ICE Sep 08 '14 at 12:51
The C standard only requires `int` to be at least 16 bits and places no upper bound on its width, so `uint_fast32_t` could be narrower than `int`, or the same width, or wider.

For example, a conforming implementation could make `int` 64 bits and `uint_fast32_t` a typedef for a 32-bit `unsigned short`. Or, conversely, `int` could be 16 bits, while `uint_fast32_t`, as the name implies, must be at least 32 bits.
One interesting consequence is that this:

```c
uint_fast32_t x = UINT_FAST32_MAX;
uint_fast32_t y = UINT_FAST32_MAX;
x * y;
```
could overflow, resulting in undefined behavior. For example, if `short` is 32 bits and `int` is 64 bits, then `uint_fast32_t` could be a typedef for `unsigned short`, which would promote to signed `int` before being multiplied; the result, which is nearly 2⁶⁴, is too big to be represented in an `int`.
POSIX requires `int` and `unsigned int` to be at least 32 bits, but the answer to your question doesn't change even for POSIX-compliant implementations. `uint_fast32_t` and `int` could still be either 32 and 64 bits respectively, or 64 and 32 bits. (The latter would imply that a 64-bit type is faster than `int`, which is odd given that `int` is supposed to have the "natural size suggested by the architecture", but it's permitted.)
In practice, most compiler implementers will tend to try to cover 8-, 16-, 32-, and 64-bit integers with the predefined types, which is possible only if `int` is no wider than 32 bits. The only compilers I've seen that don't follow this were for Cray vector machines. (Extended integer types could work around this, but I haven't seen a compiler that takes advantage of that.)
> If not, is there any other unsigned type which is guaranteed to be at least 32 bits and at least as large as `int`?
Yes, `unsigned long` (and `unsigned long long`, which is at least 64 bits).

- I think your first sentence misreads the logic of my statement; "no smaller than" means "at least as large as", and an `int_fast32_t` is clearly "at least as large as" a 16-bit `int` (or even a 32-bit one for that matter). With regard to my last sentence, I meant "other than unsigned long or unsigned long long", both of which could likely cause an unnecessary level of promotion. Perhaps the safest thing to do would be use some conditional compilation to define an `unsigned32` type which will either be `uint32_t` or `unsigned` based upon the value of `UINT_MAX`? While it would be nicer... – supercat Mar 14 '14 at 15:20
- ...to have a "standard" name for the proper casting type, enough information would be available to the preprocessor to determine what the proper type should be, since on any given implementation it should always be either `uint32_t` or `unsigned`. Is integer promotion exclusive to `int`, with nothing comparable happening to sizes between `int` and `long`? – supercat Mar 14 '14 at 15:22
- @supercat: You're right, I misread the question. I've updated it, trying to cover a bit more territory than you asked about. You might consider editing your question to say "at least as wide as". Integer promotion promotes types narrower than `int` to `int` or to `unsigned int`; there is no promotion beyond that -- though the *usual arithmetic conversions*, invoked when an operator has operands of different types, can promote wide types to wider types. – Keith Thompson Mar 14 '14 at 15:42
- I would like to see an effort to develop a language where all valid C code would either behave identically or would refuse to compile but could be easily modified to yield identical behavior in C and the new language [avoiding the biggest portability problem--things that compile but behave differently]. Any dialect of C which strives for perfect compatibility with existing code is doomed to have horrible semantics in many places where a language with the described compatibility goal could be much cleaner; given that "compatibility with existing code" can itself be a rather nebulous concept... – supercat Mar 14 '14 at 15:52
- ...if e.g. one wants to merge into a common project code which was developed by people using different compilers, allowing programmers to specify the semantics of their variables would be helpful. For example, if code needs a variable to be stored as four 8-bit bytes big-endian, the variable declaration could specify that and require that, if the address of the variable is taken, the compiler must store it in that format. For machines where that's the native format, the compiler could generate simple code. For machines where it's not, the compiler would generate code that would be more complex... – supercat Mar 14 '14 at 16:02
- ...but might still be better than anything that could be expressed in C [e.g. the processor might have to turn `foo++` into `mov eax,[foo] / bswap eax / add eax,1 / bswap eax / mov [foo],eax`, but could turn something like `foo ^= 0x12345678;` into `xor [foo],78563412h`]. In code which assumes `uint32_t` will behave as 4x8-bit big-endian, replacing `uint32_t` on such machines with the more-specifically-defined type wouldn't change behavior, but would allow the code to port directly to machines with different architectures. – supercat Mar 14 '14 at 16:07