
In Visual Studio 14 the stdint.h header has definitions for fixed-width integer types, but if you actually look at their definitions they just delegate back to primitives. The definitions are as follows:

typedef signed char        int8_t;
typedef short              int16_t;
typedef int                int32_t;
typedef long long          int64_t;
typedef unsigned char      uint8_t;
typedef unsigned short     uint16_t;
typedef unsigned int       uint32_t;
typedef unsigned long long uint64_t;

So is there any reason to use stdint.h if all it does is just fall back to primitives? I also know that Visual Studio does not simply replace these definitions at compile time, because if you try to print an int8_t to the console you get a character instead of a number, since it really is just a signed char.
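
For example, a quick sketch of what I mean (assuming int8_t really is signed char, as in the header above):

#include <cstdint>
#include <iostream>

int main() {
    std::int8_t x = 65;
    // int8_t is an alias for signed char here, so operator<< picks the
    // character overload and prints 'A' rather than the number 65.
    std::cout << x << '\n';
    // Promoting to int first prints it as a number.
    std::cout << static_cast<int>(x) << '\n'; // 65
    std::cout << +x << '\n';                  // unary + also promotes: 65
}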

EDIT

Because people are pointing out that there is nothing else these typedefs could logically delegate to, I think my question needs restating.

Why is it that the header which, according to the C++ spec, is supposed to provide integers of fixed widths of 8, 16, 32 and 64 bits, defines those integers as types which by definition can be any size the compiler wants (to put it the way someone else did in another question: the compiler can decide that an int will be a 71-bit number stored in a 128-bit memory space where the additional 57 bits are used to store the programmer's girlfriend's birthday)?

vandench
  • What else would those `typedef`s delegate back to? And use `<cstdint>` in C++, as `<stdint.h>` is *deprecated*. – DeiDei Feb 05 '17 at 21:56
  • @DeiDei `stdint.h` is supposed to contain fixed width integers because the compiler can define the primitives to be literally almost anything. – vandench Feb 05 '17 at 21:58
  • Yes, the compiler defines the primitives to be the appropriate size for the platform, and the library is responsible for getting the `typedef`s in `stdint.h` correct. Don't judge much from Visual C++, as it is Windows-only and the integer types there don't vary too much. – DeiDei Feb 05 '17 at 22:02
  • @DeiDei OS X also has essentially the same definitions. – vandench Feb 05 '17 at 22:09
  • @DeiDei I chose to use `stdint.h` instead of `cstdint` because `cstdint` just falls back to `stdint.h`. – vandench Feb 05 '17 at 22:18
  • The header generally comes with the compiler and is written properly to map the fixed width types to the appropriately sized types for that particular compiler/version. The "for instance" in your edit makes no sense. – Retired Ninja Feb 06 '17 at 00:00
  • @RetiredNinja my edit does not contain the words "for instance". – vandench Feb 06 '17 at 01:17

4 Answers

4

Different platforms define the primitives differently. On one platform int might be 16-bit, while on another it's 32-bit. If you strictly depend on a variable having a certain width, you should use the types in stdint.h, which will always be typedef'd correctly to their respective primitives on the current platform.
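
For instance, here is a minimal sketch (the struct and field names are made up) of the kind of code that strictly depends on widths, such as an on-disk record layout shared between platforms:

#include <cstdint>

struct RecordHeader {
    std::uint32_t magic;    // exactly 32 bits on every platform
    std::uint16_t version;  // exactly 16 bits on every platform
    std::uint16_t flags;
};

// Had `unsigned long` been used for `magic`, it would be 4 bytes on
// 64-bit Windows but 8 bytes on 64-bit Linux, silently changing the
// file format between platforms.
static_assert(sizeof(std::uint32_t) == 4, "uint32_t is exactly 4 bytes");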

Emily
  • I thought it was the compiler which gets to decide what size your numbers are. I know what `stdint.h` is for but it just typedef's back to primitives which can be set by your compiler. – vandench Feb 05 '17 at 22:01
  • On very old systems, many compilers had `int` set to be 8-bit, because `int` is regarded as being the type you use for loops and other generic stuff, so it makes sense to not have it have a width the system can't deal with well. So yes, it's the compiler's decision, and some compilers decide to define different primitives to have different sizes, suiting their target platform. Although, to be fair, nowadays you can pretty much rely on it being the same everywhere. – Emily Feb 05 '17 at 22:06
  • @vandench: Yes, that is true. But standard library implementations are usually distributed with compilers, and the people who write one are familiar with the details of the other (if they are not the same people). The `"stdint.h"` file which is distributed with gcc may look different than the one distributed with Visual Studio. – Benjamin Lindley Feb 05 '17 at 22:34
  • @Lignum "On very old systems, many compilers had int set to be 8-bit". Provide some evidence for this. I have never encountered a C compiler that had a word size (i.e. sizeof(int)) of less than 16 bits. Even when writing assembly code for 8-bit processors like the Z80, we typically thought of general-purpose integers as 16-bit words. The 8-bitness of things like the Z80 is the size of the data bus, not the size of what we think of as ints. – Neil Butterworth Feb 05 '17 at 23:00
  • @NeilButterworth Well, that one might not have been accurate, I'm no expert on retrocomputing, but you get the idea. – Emily Feb 05 '17 at 23:54
3

So is there any reason to use stdint.h if all it does is just fall back to primitives?

What else would it do?

All types defined in headers can be traced back to the built-in types.

This header just gives you convenient, standard-defined, guaranteed-consistent aliases.
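
A sketch of that point: on mainstream implementations (including the MSVC definitions quoted in the question), int32_t has to name one of the built-in signed integer types, because there is nothing else for it to name.

#include <cstdint>
#include <type_traits>

// Which alias you get depends on the platform, hence the chain of ||.
static_assert(std::is_same<std::int32_t, signed char>::value ||
              std::is_same<std::int32_t, short>::value       ||
              std::is_same<std::int32_t, int>::value         ||
              std::is_same<std::int32_t, long>::value        ||
              std::is_same<std::int32_t, long long>::value,
              "int32_t is just an alias for a built-in type");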

Lightness Races in Orbit
  • I guess it's worth mentioning that the `typedef`s in question are *optional* as per the standard and are available only if the platform supports them. Use `int_leastN_t` for more portability. – DeiDei Feb 05 '17 at 22:04
1

I understand from both the original and the restated question that there is a misconception about guaranteed-width integers (I say guaranteed rather than fixed because not all types in stdint.h are of fixed width) and about the actual problems they solve.

C/C++ define primitives such as int, long int, long long int etc. For simplicity let's focus on the most common of all, i.e. int. What the C standard defines is that int should be at least 16 bits wide. However, compilers on all widely used x86 platforms will actually give you a 32-bit wide integer when you define an int. This happens because x86 processors can directly fetch a 32-bit wide field (the word size of a 32-bit x86 CPU) from memory, feed it as-is to the ALU for 32-bit arithmetic and store it back to memory, without any shifts or padding, and that's pretty fast. But that's not the case for every compiler/architecture combination. If you work on an embedded device with, for example, a very small MIPS processor, you will probably get a 16-bit wide integer from the compiler when you define an int. So the width of primitives is chosen by the compiler based on the hardware capabilities of the target platform, subject to the minimum widths defined by the standard. And yes, on a strange architecture with e.g. a 25-bit ALU, you will probably be given a 25-bit int.
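
As a quick way to see what your own compiler decided, a sketch like this prints the actual width of int on the current target:

#include <climits>
#include <cstdio>

int main() {
    // The standard only guarantees a minimum range (at least 16 bits);
    // the actual width is whatever the compiler chose for this target:
    // typically 32 on x86, possibly 16 on a small embedded part.
    std::printf("int is %zu bits wide here\n", sizeof(int) * CHAR_BIT);
}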

In order for a piece of C/C++ code to be portable across many different compiler/hardware combinations, stdint.h provides typedefs that guarantee you a certain width (or a minimum width). So when, for example, you want a 16-bit signed integer (e.g. to save memory, or for mod-counters), you don't have to worry about whether you should use an int or a short; you simply use int16_t. The developers of the compiler will provide you a properly constructed stdint.h that typedefs the requested fixed-size integer to the actual primitive that implements it. That means that on x86 an int16_t will probably be defined as short, while on a small embedded device it may be an int, with all these mappings maintained by the compiler's developers.
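
For example, a sketch of the mod-counter case (using the unsigned variant, whose wrap-around is well defined):

#include <cstdint>
#include <cstdio>

int main() {
    // Whichever primitive uint16_t maps to (unsigned short on x86,
    // perhaps unsigned int on a small embedded target), the counter
    // always holds exactly 16 bits of state and wraps modulo 65536.
    std::uint16_t counter = 0;
    for (long i = 0; i < 70000; ++i) {
        ++counter;
    }
    std::printf("counter = %u\n", static_cast<unsigned>(counter)); // 4464
}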

exarsis
0

In response to the restated question: it's not the compiler making the choice to store a birthday in the upper 57 bits; it was a developer. The compiler cannot use whatever bit depth it wants for an integer type. It will use whatever bit depth the compiler's developers told it to use, and the developers will have selected those bit depths according to the requirements of the C++ standard. Once the compiler has been configured and compiled, the bit depth will not change1. Those 71 bits are guaranteed until you change compilers or compiler versions.
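
If a piece of code does quietly depend on those compiler-chosen widths, one way (sketched here) to make the dependency explicit is a compile-time check, so that a change of compiler or version produces an error instead of a silent behaviour change:

#include <climits>

// These asserts state the assumptions this hypothetical code relies on.
static_assert(CHAR_BIT == 8, "this code assumes 8-bit bytes");
static_assert(sizeof(int) * CHAR_BIT == 32, "this code assumes a 32-bit int");
static_assert(sizeof(long long) * CHAR_BIT == 64,
              "this code assumes a 64-bit long long");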

Writing good code is hard enough without the compiler throwing variable sizes at you. Consider what could happen with variable bit depths: an input that was just fine in Tuesday's build overflows and crashes a jetliner, because on Wednesday the compiler performed some calculations and decided it would never see anything over 17 bits.

1 I suppose you could build a compiler that loaded the various integer sizes out of a configuration file at run time, but I suspect that it would make writing the optimizer hideously complicated.

user4581301