
I had a discussion with my team lead, who told me that using the uintX_t types is very problematic and causes performance problems, and I can't understand why. As far as I can tell, using uint8_t and uint16_t is the same as using unsigned char and unsigned short, and I don't think those types cause performance problems. Likewise, uint64_t is like long. Maybe performance problems could occur with uint128_t and so on. Is that correct, or am I missing something?

Update: I know that unsigned char and unsigned short are not guaranteed to be 8 and 16 bits on all platforms; I just used the classic values.

YAKOVM
  • He's wrong; how can they possibly cause performance issues. – trojanfoe Nov 12 '13 at 16:06
  • They could potentially cause performance issues on platforms with weird integer sizes. I really wouldn't worry about that. – harold Nov 12 '13 at 16:08
  • Note: Fixed-size integer types are not guaranteed to be defined on all platforms. If you want to have the fastest type with at least x bits, use the fast-int types. See http://en.cppreference.com/w/cpp/header/cstdint – stefan Nov 12 '13 at 16:12
  • 2
    I don't know the exact answer, but I do know the main reason `short` and `int` are implementation defined is to allow for the most efficient implementation. If you specify a fixed width `int`, the only thing that can serve to do is either allow the performance to stay the same or reduce it. – JustinBlaber Nov 12 '13 at 16:17
  • 1
    Does your team lead have specific examples to show you? Without those any discussion will be vague and probably not useful. – Retired Ninja Nov 12 '13 at 16:57

4 Answers

4

Guaranteeing the size is their main purpose.

Those types aren't made for performance; they're made to guarantee that integer sizes are the same across various systems.

For example, when you use int32_t you can be sure it is 32 bits wide anywhere the code compiles, but you can't be sure about the size of int.

The problem is that using these exact-width types may affect performance; the int_fastX_t types can reduce that bad effect because they only guarantee a minimum size.

For example, the compiler can use a 32-bit int for an int_fast16_t on a 32-bit machine.

crashmstr
masoud
  • Ok, I understand that! But the question still remains - what could be the performance issues of using them? – YAKOVM Nov 12 '13 at 16:07
  • @Yakov: I've mentioned that. Yes, it may cause performance issues; you can use the `int_fastX_t` types instead. – masoud Nov 12 '13 at 16:18
  • 1
    Maybe check out the question [here](http://stackoverflow.com/questions/5069489/performance-of-built-in-types-char-vs-short-vs-int-vs-float-vs-double). According to one of the answers, specifying a fixed width `int` smaller than the native `int` can reduce performance through load/stores. Then, fixed width `int` larger than the native `int` can reduce performance because it's larger than the register size, so it may need to split computation on the upper and lower halves separately. But overall, on x86 apparently it doesn't make a huge difference either way. – JustinBlaber Nov 12 '13 at 16:29
2

It's a "how long is a piece of string"-type question.

Whenever anyone makes claims of this nature that you care about, ask to see the code they've used and the results of their benchmarks. Then you can judge whether the benchmarks apply to your case, and perhaps run them yourself.

In other words, the claim is not worth very much without benchmarks that reflect your environment and your actual use of those types. It could be that your team lead has done thorough profiling of the code base in question; it could also be that he simply "thinks" that uintX_t would be slower. We have no way of knowing which it is.

NPE
2

You may have a performance issue if you use uint8_t on a 32-bit processor and you don't actually need the automatic wrap-around when the value exceeds 255.

Why: the compiler may need to apply a mask to the value before processing or using it.

For example, if the 8-bit value is stored in a 32-bit register and you need to do a comparison with it, and your processor has no instruction to compare using only the 8 lower bits of the register, the compiler must apply the mask 0x000000FF before doing the comparison.

That's why the types int_least8_t and uint_fast8_t exist. Take a look at the stdint.h page to see all available types.

benjarobin
0

They are types that attempt to make code cross-platform by guaranteeing their size. There is nothing about them, in particular, that would cause a performance issue. The problem may be in how they are used (which would be no different for unsigned int, unsigned short, and unsigned char).

In short, your team lead is wrong, and that claim is likely a direct consequence of the Peter Principle.

Zac Howland