At the "bottom" of the compiler, there are only a small number of actual data types - it depends a little on the actual compiler implementation how those are described, but for example LLVM has
i1
which represents bool
i8
which represents char
i16
which represents short
i32
which represents int
(and sometimes long
)
i64
which represents long long
(and sometimes long
)
The LLVM compiler also understands pointers as a separate kind of type.
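As a rough illustration, here is a sketch of how a few C++ declarations typically map onto those LLVM types on a common 64-bit target; the exact mapping depends on the target's data model (LP64 vs. LLP64), so take the comments as typical rather than guaranteed:

```cpp
#include <cstdint>

bool      flag   = true;    // typically i1 in LLVM IR (stored as i8 in memory)
char      letter = 'a';     // i8
short     small  = 1;       // i16
int       normal = 2;       // i32
long long big    = 3;       // i64
long      varies = 4;       // i32 on LLP64 (Windows), i64 on LP64 (Linux/macOS)
int*      ptr    = &normal; // a pointer type, distinct from the integer types
```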
Signed and unsigned types only matter for certain operations, so at the base layer the compiler doesn't differentiate them; it only chooses a signed or unsigned form of an instruction where it matters, chiefly for less-than/greater-than comparisons.
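A small sketch of why that matters: the same bit pattern compares differently depending on whether the comparison is signed or unsigned (in LLVM terms, icmp slt versus icmp ult):

```cpp
#include <cstdint>
#include <iostream>

int main() {
    std::uint32_t u = 0xFFFFFFFFu; // all 32 bits set
    std::int32_t  s = -1;          // the same bit pattern, interpreted as signed

    // Unsigned comparison: 0xFFFFFFFF is the largest 32-bit value, so this prints 0.
    std::cout << (u < 1u) << '\n';

    // Signed comparison: -1 is less than 1, so this prints 1.
    std::cout << (s < 1) << '\n';
}
```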
Which one do you choose? It really depends. Do you want a guaranteed size in all cases (good for file formats, protocols, and API boundaries)? If so, use uint32_t or int32_t from #include <cstdint>. If you just want something that "is bigger than short", then use int (assuming we know we're never going to compile this on a system with 16-bit integers!). int is defined as "a type that is natural to the machine, and will be fast", which is not guaranteed to be true of int32_t (it IS the same thing on all existing Windows platforms, but that's not certain to hold in the future and/or on other platforms).
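As a minimal sketch of the fixed-width approach, here is a hypothetical file-format header whose on-disk layout must be identical on every platform, next to ordinary code that can happily use plain int:

```cpp
#include <cstdint>

// Hypothetical on-disk header: every field has an exact, guaranteed width,
// so the layout does not change between platforms or compilers.
struct FileHeader {
    std::uint32_t magic;
    std::uint32_t version;
    std::uint64_t payload_size;
};

// In contrast, loop counters and everyday arithmetic can just use int,
// the "natural, fast" type for the target machine.
int count_nonzero(const std::uint32_t* data, int n) {
    int count = 0;
    for (int i = 0; i < n; ++i) {
        if (data[i] != 0) ++count;
    }
    return count;
}
```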
If you want code that is portable, it may make sense to do your own typedef, e.g.
typedef uint32_t count_type;
That way, if you ever need to change that type, you only need to change it in one place.
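For example, a sketch along those lines (count_type is just a hypothetical alias name; a modern alternative is a using alias):

```cpp
#include <cstdint>
#include <vector>

// The one place to change if the requirements ever change
// (e.g. to std::uint64_t if counts can exceed 4 billion).
typedef std::uint32_t count_type;
// Equivalent modern spelling: using count_type = std::uint32_t;

count_type count_matches(const std::vector<int>& values, int target) {
    count_type count = 0;
    for (int v : values) {
        if (v == target) ++count;
    }
    return count;
}
```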
Windows has its own definitions, largely because there weren't any strictly-sized standard types at the time, so they had to make up their own. And of course, once you have introduced a name for a type, removing that name would break a lot of code, just because there is now another name that does the same thing; at the same time, the standard requires the new types, so those are defined as well.