4

Which types should I use when programming C++ on Linux? Is it a good idea to use types from stdint.h, such as int16_t and uint8_t?

On one hand, surely stdint.h won't be available for programming on Windows. On the other hand, the size of e.g. short isn't clear at first glance. And it's even more intuitive to write int8_t instead of char...

Does the C++ standard guarantee that the sizes of the standard types will remain unchanged in the future?

Szał Pał
  • 306
  • 1
  • 7
  • 20
  • 1
    The exact sizes of core types, like `short` and `int`, are not guaranteed to be consistent across platforms. That is why types like `int16_t` and `int32_t` exist at all. Some Windows compilers do have `stdint.h` (or equivalent). Just be sure to use `#include <cstdint>` (C++) rather than `#include <stdint.h>` (C). – Remy Lebeau Nov 02 '16 at 18:56
  • The most obvious way to rely on stdint.h on Windows is of course to use GCC. MSVC++ has had it as well for at least the past 3 years. – Hans Passant Nov 02 '16 at 19:01
  • Is it important that your types have an *exact* width? Or is it just an advantage if `int` is 36 bits instead of 32 on [some systems](http://stackoverflow.com/questions/6971886/exotic-architectures-the-standards-committees-care-about), rather than the code not compiling at all? – Bo Persson Nov 02 '16 at 19:03
  • 1
    In fact Microsoft's implementation *does* support `<stdint.h>` (at least in recent versions). https://msdn.microsoft.com/en-us/library/323b6b3k.aspx – Keith Thompson Nov 02 '16 at 19:09
  • @BoPersson So if it is important, I should use the exact `int16_t` for example? – Szał Pał Nov 02 '16 at 19:25

2 Answers

7

First off, Microsoft's implementation does support <stdint.h>.

Use the appropriate type for what you're doing.

If you need, for example, an unsigned type that's exactly 16 bits wide with no padding bits, use uint16_t, defined in <stdint.h>.

If you need an unsigned type that's at least 16 bits wide, you can use uint_least16_t, or uint_fast16_t, or unsigned short, or unsigned int.

You probably don't need exact-width types as often as you think you do. Very often what matters is not the exact size of a type, but the range of values it supports. But exact representation is important when you're interfacing to some externally defined data format. In that case, you should already have declarations that tell you what types to use.
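
For illustration only (the wire-format struct and the helper below are hypothetical, not something the question or answer defines), here is a sketch of the two situations:

```
#include <cstddef>
#include <cstdint>

// Externally defined layout: each field is specified as exactly 16 bits,
// so the exact-width types are the right tool.
struct WireHeader {
    std::uint16_t version;
    std::uint16_t length;
};

// Internal bookkeeping: only the range matters (a count that easily fits
// in an int), so a plain int is fine.
int count_nonzero(const std::uint16_t* data, std::size_t n) {
    int count = 0;
    for (std::size_t i = 0; i < n; ++i) {
        if (data[i] != 0) ++count;
    }
    return count;
}
```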

There are specific requirements on the ranges of the predefined types: char is at least 8 bits, short and int are at least 16 bits, long is at least 32 bits, and long long is at least 64 bits. Also, short is at least as wide as char, int is at least as wide as short, and so forth. (The standard specifies minimum ranges, but the minimum sizes can be derived from the ranges and the fact that a binary representation is required.)
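
A compile-time sketch of those minimums (not part of the original answer; it only restates the standard's requirements via `<climits>`, so it should compile on any conforming implementation):

```
#include <climits>

static_assert(CHAR_BIT >= 8, "char is at least 8 bits");
static_assert(sizeof(short) * CHAR_BIT >= 16, "short is at least 16 bits");
static_assert(sizeof(int) * CHAR_BIT >= 16, "int is at least 16 bits");
static_assert(sizeof(long) * CHAR_BIT >= 32, "long is at least 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long is at least 64 bits");
```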

Note that <stdint.h> is a C header. If you #include it in a C++ program, the type names will be imported directly into the global namespace, and may or may not also be imported into the std namespace. If you #include <cstdint>, then the type names will be imported into the std namespace, and may or may not also be imported into the global namespace. Macro names such as UINT32_MAX are not in any namespace; they're always global. You can use either version of the header; just be consistent about using or not using the std:: prefix.
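
As a quick sketch of that convention (the variable name is made up): include the C++ header and use the `std::` prefix consistently; the macro takes no prefix.

```
#include <cstdint>

std::uint32_t value = 42;  // type name qualified with std::

static_assert(UINT32_MAX == 4294967295u,  // macro: always global, never std::
              "UINT32_MAX is the largest std::uint32_t value");
```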

Keith Thompson
  • 254,901
  • 44
  • 429
  • 631
3

The C++ standard does not specify much about the sizes of integer types (such as int, long or char). If you want to be sure that a certain type has a fixed size across platforms, you can use C++11's fixed-width integer types, which are standardized and guaranteed to have the specified width.

To use them, #include <cstdint>.
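
For example (a minimal sketch with made-up names, just to show the spelling):

```
#include <cstdint>
#include <limits>

std::int16_t temperature = -40;  // exactly 16 bits, two's complement
std::uint8_t flags = 0xA1;       // exactly 8 bits, unsigned

// The fixed-width guarantee can even be checked at compile time:
static_assert(std::numeric_limits<std::uint32_t>::digits == 32,
              "std::uint32_t has exactly 32 value bits");
```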

Does the C++ standard guarantee that the sizes of the standard types will remain unchanged in the future?

Not likely. On 8-bit computers, the sizes of integer types were different from what they are today. In the future, in 2042, with 1024-bit computers, I assume long long will be 1024 bits long.

However, we can be almost absolutely sure that std::uint32_t will stay 32 bits wide.

Zereges
  • 5,139
  • 1
  • 25
  • 49
  • 1
    Any conforming C or C++ implementation must make `int` at least 16 bits wide, even if it requires extra CPU instructions to implement operations on it. And don't confuse `int` and "integer". For example, `char` is an integer type that's typically 8 bits wide. – Keith Thompson Nov 02 '16 at 19:17
  • I'd go further and state that there's no sane *or* insane C compiler out there that has 8 bit `int`s (not unless you tweak a fork of one just to prove me wrong). – Kuba hasn't forgotten Monica Nov 02 '16 at 19:19