59

I have recently discovered the existence of the standard "fastest" integer types, namely int_fast32_t and int_fast64_t.

I was always told that, for normal use on mainstream architectures, it is better to use the classical int and long, which should always match the processor's default word size and thus avoid useless numeric conversions.

The C99 Standard says in §7.18.1.3p2:

"The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."

And there is also a related footnote (note 225 to §7.18.1.3p1):

"The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."

It's unclear to me what "fastest" really means. I do not understand when I should use these types and when I should not.

I have googled this a little and found that some open source projects have switched some of their functions to these types, but not all of them, and they didn't really explain why they changed only a part of their code.

Do you know the specific cases/usages in which int_fastXX_t is really faster than the classical types?

Coren
    +1. I've been wondering about this for quite some time, and the [C rationale](http://www.open-std.org/jtc1/sc22/wg14/www/C99RationaleV5.10.pdf) is quiet on the topic. – Fred Foo Feb 11 '12 at 11:18

4 Answers

27

From the C99 Standard, §7.18.1.3 *Fastest minimum-width integer types*:

(7.18.1.3p1) "Each of the following types designates an integer type that is usually fastest225) to operate with among all integer types that have at least the specified width."

225) "The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements."

and

(7.18.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."

The types int_fastN_t and uint_fastN_t are the counterparts of the exact-width integer types intN_t and uintN_t. The implementation guarantees only that they are at least N bits wide; it is free to make them wider if working with a larger type lets it generate faster code.

For example, on a 32-bit machine, uint_fast16_t could be defined as unsigned int rather than as unsigned short, because working with types of the machine's word size is more efficient.

Another reason for their existence is that the exact-width integer types are optional in C, while the fastest minimum-width integer types and the minimum-width integer types (int_leastN_t and uint_leastN_t) are required.

ouah
    This doesn't explain much. "on a 32-bit machine, `uint_fast16_t` could be defined as an `unsigned int`" -- yes, but you could use plain old `unsigned int` directly, since it's the native integer width and the standard guarantees it is at least 16 bits wide. Similarly, `long` meets about the same constraints as `int_fast32_t`. – Fred Foo Feb 11 '12 at 11:15
I have read the rationale, but this does not say _when_ it's faster, or for what kind of specific usage. If this is _really_ faster every time, why do we not use them by default? – Coren Feb 11 '12 at 11:23
@larsmans `uint_fast16_t` could be an alias for `unsigned int` on a 32-bit machine and for `unsigned long` on a 64-bit machine. Using `unsigned int` instead of `uint_fast16_t` in your program will not be the same if you intend to compile your program on different machines. – ouah Feb 11 '12 at 11:24
@Coren this is a judgement left to the implementation, as the type could be fastest for one usage and not for another. – ouah Feb 11 '12 at 11:27
@ouah that's my main question: when should I use it? Do you have an example usage for a mainstream architecture? – Coren Feb 11 '12 at 11:34
@Coren I already gave an example; the problem, as you mentioned, is the definition of *fast*. The implementation may choose to privilege fast arithmetic operations over fast array access, or the opposite. This is why I personally don't use these types; I prefer to choose an exact-width integer type that fits my needs. – ouah Feb 11 '12 at 11:43
5

GNU libc defines {int,uint}_fast{16,32}_t as 64-bit when compiling for 64-bit CPUs and as 32-bit otherwise. Operations on 64-bit integers are faster on Intel and AMD 64-bit x86 CPUs than the same operations on 32-bit integers.

Ben
Pr0methean
  • Yes, but shouldn't `int` be 64-bit on such a machine? – potrzebie Oct 13 '14 at 10:03
  • No, `int` can be as small as 16 bits, and its size usually depends on the compiler but not on the platform. This is probably an artifact of the 16- to 32-bit transition. – Pr0methean Nov 03 '14 at 08:50
Your link for "faster" compares 32-bit and 64-bit CPU modes performing some complex task; it does not compare the performance of 32-bit and 64-bit integers under the same CPU mode. – plugwash Feb 03 '16 at 19:54
3

There will probably not be a difference except on exotic hardware where int32_t and int16_t don't even exist.

In that case you might use int_least16_t to get the smallest type that can hold at least 16 bits. That could be important if you want to conserve space.

On the other hand, using int_fast16_t might get you another type, larger than int_least16_t but possibly faster for "typical" integer use. The implementation will have to consider what is faster and what is typical. Perhaps this is obvious for some special purpose hardware?

On most common machines these 16-bit types will all be a typedef for short, and you don't have to bother.

Bo Persson
0

IMO they are pretty pointless.

The compiler doesn't care what you call a type, only what size it is and what rules apply to it. So if int, int32_t and int_fast32_t are all 32 bits on your platform, they will almost certainly all perform the same.

The theory is that implementers of the language should choose based on what is fastest on their hardware, but the standard writers never pinned down a clear definition of "fastest". Add to that the fact that platform maintainers are reluctant to change the definition of such types (because it would be an ABI break), and the definitions end up arbitrarily picked at the start of a platform's life (or inherited from other platforms the C library was ported from) and never touched again.

If you are at a level of micro-optimisation where you think variable size may make a significant difference, then benchmark the different options with your code on your processor. Otherwise don't worry about it. The "fast" types don't add anything useful IMO.

cincodenada
plugwash
If the Standard had allowed `int_fastN_t` types to have non-deterministic upper limits, that would have allowed some useful optimizations that would be unavailable otherwise. On some platforms, operations on an `int32_t` that is stored in a register will be faster than operations on an `int16_t` stored likewise, but operations on an `int16_t[]` will often be faster than operations on an `int32_t[]`. Letting `int_fast16_t` behave non-deterministically as 16 or 32 bits would let compilers achieve faster behavior in both cases. – supercat Oct 05 '18 at 19:52