The header file <stdint.h> usually provides typedefs and macro constants for integers of 8-, 16-, 32- and 64-bit width.

The standard also allows exact-width types for other values of N to be defined, using identifiers of the form intN_t, uintN_t, etc., although I personally have yet to encounter a platform that defines any beyond the common four already mentioned.
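
As a sketch of how one could feature-test for such an optional type, assuming nothing beyond the standard: an implementation that provides an intN_t is also required to define the matching INTN_MAX and INTN_MIN limit macros, so those work as a compile-time test. The choice of 24 bits below is purely illustrative.

#include <stdint.h>
#include <stdio.h>

int main(void) {
#ifdef INT24_MAX                      /* defined whenever int24_t is provided */
    int24_t sample = 1000;            /* optional exact-width 24-bit type */
    printf("int24_t available, max = %jd\n", (intmax_t)INT24_MAX);
#else
    int_least32_t sample = 1000;      /* mandatory fallback, at least 32 bits */
    printf("no int24_t on this platform\n");
#endif
    (void)sample;                     /* silence unused-variable warnings */
    return 0;
}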

Curiously, the source code for the Clang/LLVM stdint.h provides conditional support for every N-bit type where N is a multiple of 8, from 8 to 64. Code fragment (source):

#ifdef __INT48_TYPE__
typedef __INT48_TYPE__ int48_t;
typedef __UINT48_TYPE__ uint48_t;
typedef int48_t int_least48_t;
typedef uint48_t uint_least48_t;
typedef int48_t int_fast48_t;
typedef uint48_t uint_fast48_t;
# define __int_least32_t int48_t
# define __uint_least32_t uint48_t
# define __int_least16_t int48_t
# define __uint_least16_t uint48_t
# define __int_least8_t int48_t
# define __uint_least8_t uint48_t
#endif /* __INT48_TYPE__ */


#ifdef __INT40_TYPE__
typedef __INT40_TYPE__ int40_t;
typedef __UINT40_TYPE__ uint40_t;
typedef int40_t int_least40_t;
typedef uint40_t uint_least40_t;
typedef int40_t int_fast40_t;
typedef uint40_t uint_fast40_t;
# define __int_least32_t int40_t
# define __uint_least32_t uint40_t
# define __int_least16_t int40_t
# define __uint_least16_t uint40_t
# define __int_least8_t int40_t
# define __uint_least8_t uint40_t
#endif /* __INT40_TYPE__ */

Now why might they have done this? Is this wishful thinking on the part of the Clang developers ("you never know when one day a system with 40-bit native arithmetic might come around!") or are there actually some systems somewhere where built-in support for this stuff is provided?

saxbophone
  • Might be worth checking the history on that file to see if the commit message provides any insight. – Stephen Newell Jul 26 '22 at 16:40
  • @StephenNewell that's a good point, I briefly considered that but assumed the history on Github might've been munged by import or what-not. I will see if I can do some digging, thanks. – saxbophone Jul 26 '22 at 16:41
  • https://www.microchipdeveloper.com/dsp0201:40-bit-dsp-adder – Hans Passant Jul 26 '22 at 16:42
  • They seem to have missed 36-bit integers on early-ish DEC computers! – Neil Butterworth Jul 26 '22 at 16:42
  • @HansPassant I'm aware that N-bit equipment and systems are available generally. What I'm searching for is a system for which the C standard library header has actually been specialised, i.e. an example of these types being used in practice. – saxbophone Jul 26 '22 at 16:44
  • @StephenNewell judging from the commit that introduced the referenced code, it seems it was generalised rather than written for any one system in particular. However, it looks like LLVM performs some machine introspection, such as "how many bits exactly are `short`, `long`, etc...", from which the N-bit types may be populated (see the probing sketch after this comment thread). – saxbophone Jul 26 '22 at 16:48
  • Audio DSPs are an example. High quality audio uses 24-bit samples, and one of the audio DSPs that I've used had 24-bit registers and a 24-bit memory interface with no byte-addressable capability. `char`, `short` and `int` were all 24 bits. So the only fixed-width types that system could support would be `int24_t` and `uint24_t`. – user3386109 Jul 26 '22 at 16:53
  • The standard says which `intN_t` types there can be. Compilers can skip having some of them, but they aren't allowed to add their own types and still call themselves standard-conforming. What compilers do instead is provide non-standard types like __int128, which are implementation-defined. – Goswin von Brederlow Jul 26 '22 at 17:59
  • PIC has 24-bit integers. `__uint24` https://onlinedocs.microchip.com/pr/%20GUID-BB433107-FD4E-4D28-BB58-9D4A58955B1A-en-US-1/index.html?GUID-3A37A613-2364-4965-9519-E79FE101ECC3 – KamilCuk Jul 26 '22 at 18:07
  • @GoswinvonBrederlow The standard specifies that implementations may define `intN_t` etc. for any **N** besides 8, 16, 32 or 64: https://en.cppreference.com/w/c/types/integer#:~:text=The%20implementation%20may,exactly%2024%20bits. – saxbophone Jul 26 '22 at 18:59
  • @NeilButterworth 36-bit was also on early IBM machines: The very first _binary_ (vs. decimal) machine was the IBM 701 in 1952 with 36 bits. Followed by the 704, 709, 7090, 7094. See: http://www.columbia.edu/kermit/dec20.html Also, IIRC, early Honeywell and Burroughs machines had 36 bits. Again, IIRC, it was because the military needed/wanted 36 bits to make rocket/missile trajectory calculations accurate enough. See: https://en.wikipedia.org/wiki/36-bit_computing – Craig Estey Jul 26 '22 at 19:11
  • The Cyber series has 6/12-bit units, at least in the encoding of its data. – Thomas Matthews Jul 26 '22 at 19:53
  • Do custom ASICs count? I worked on a custom cryptography processor that had 128-bit registers. – Thomas Matthews Jul 26 '22 at 19:54
  • @saxbophone It doesn't say so specifically, but I was sure they weren't allowed to add more types to `stdint.h`. The compiler can define its own type somewhere else. The biggest hurdle, and maybe why I remember wrong, is that the header usually belongs to the libc. And then it has to work with every compiler on the system. So it's hard to get a type added there. – Goswin von Brederlow Jul 26 '22 at 20:29
  • POSIX _mandates_ that `char` is _exactly_ 8 bits (i.e. `CHAR_BIT` is 8). That's been true for decades. AFAIK, the only arch still in use that violates that is some TI DSPs with `CHAR_BIT` of 16 (i.e. _not_ POSIX compliant). And, other types are defined in terms of "so many" chars (e.g. `short` is 2 `char`, etc.). IMO, things are much easier if each successively larger type is a power of 2 multiple of `char` (e.g. `char=8, short=16, int=32, long/long long=64, __int128=128`). Having `char=8` but `int=36` (or `int=24`) (e.g.) makes little sense and would break tons of existing code. – Craig Estey Jul 26 '22 at 21:24
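
To make the machine introspection mentioned in the comments concrete, here is a small sketch of mine, assuming only standard C: the per-target width facts (CHAR_BIT and the sizes of the standard types) are the kind of information from which a compiler's predefined __INTn_TYPE__ macros are derived, and they are easy to inspect:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* Storage widths in bits (ignoring any padding bits); these are the
       per-target facts from which macros like __INT24_TYPE__ or
       __INT48_TYPE__ could be derived. */
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("short     = %zu bits\n", sizeof(short) * CHAR_BIT);
    printf("int       = %zu bits\n", sizeof(int) * CHAR_BIT);
    printf("long      = %zu bits\n", sizeof(long) * CHAR_BIT);
    printf("long long = %zu bits\n", sizeof(long long) * CHAR_BIT);
    /* On the 24-bit audio DSP described in the comments, CHAR_BIT would be
       24 and char/short/int would all report 24 bits, making int24_t and
       uint24_t the only exact-width types the platform could offer. */
    return 0;
}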

0 Answers