23

This is related to the following question:

How to Declare a 32-bit Integer in C

Several people there mentioned that int is always 32-bit on most platforms. I am curious whether this is true.

Do you know any modern platforms with int of a different size? Ignore dinosaur platforms with 8-bit or 16-bit architectures.

NOTE: I already know how to declare a 32-bit integer from the other question. This one is more like a survey to find out which platforms (CPU/OS/compiler) support integers of other sizes.

ZZ Coder

8 Answers

44

As several people have stated, there is no guarantee that an 'int' will be 32 bits. If you want to use variables of a specific size, particularly when writing code that involves bit manipulation, you should use the 'Standard Integer Types' mandated by the C99 specification.

int8_t
uint8_t
int32_t
uint32_t

etc...

They are generally of the form [u]intN_t, where the 'u' specifies that you want an unsigned quantity and N is the width in bits.

The correct typedefs for these should be available in stdint.h on whichever platform you are compiling for; using them allows you to write nice, portable code :-)
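
For example, a minimal sketch of their use, assuming a C99 toolchain that ships stdint.h and inttypes.h (the PRI* macros come from inttypes.h and expand to the right printf conversion for each typedef, whatever built-in type it maps to):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint32_t mask = 0xFFFF0000u;  /* exactly 32 bits, unsigned */
    int8_t   tiny = -5;           /* exactly 8 bits, two's complement */

    printf("mask = 0x%" PRIX32 "\n", mask);
    printf("tiny = %" PRId8 "\n", tiny);
    return 0;
}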

David Claridge
  • Of course, I just checked and I think this is broken on Windows: http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx But that does not sound right, so I'm going to double-check on my brother's Windows machine. – Robert Massaioli Jan 26 '10 at 07:16
  • Keep in mind these types are _optional_ in C99. If an implementation provides an underlying type of the correct properties, it _must_ give you the corresponding `[u]intN_t` type but there's no guarantee an implementation will have such a type. However, they would be few and far between, so this is probably the best approach if your compiler is C99-compliant. – paxdiablo Jan 01 '12 at 04:36
  • @paxdiablo The `{,u}int_{least,fast}{8,16,32,64}_t` types are *required* in C99 (§7.18.1.1) though. They can be used for much the same purpose -- the only caveat being that they're only guaranteed to be *at least* (instead of *exactly*) 8/16/32/64 bits wide. – Craig Barnes Jun 01 '18 at 10:45
  • Good point, @Craig, the types explicitly listed in this answer are not, but you can always use a possibly-wider type and ignore any bits left of the thirty-second. – paxdiablo Jun 01 '18 at 11:01
17

"is always 32-bit on most platforms" - what's wrong with that snippet? :-)

The C standard does not mandate the sizes of many of its integral types. It does mandate relative sizes, for example, sizeof(int) >= sizeof(short) and so on. It also mandates minimum ranges but allows for multiple encoding schemes (two's complement, ones' complement, and sign/magnitude).

If you want a variable of a specific size, you need to use one suitable for the platform you're running on, such as with #ifdefs, something like the following (one way to define those test macros is sketched after the block):

#ifdef LONG_IS_32BITS
    typedef long int32;
#else
    #ifdef INT_IS_32BITS
        typedef int int32;
    #else
        #error No 32-bit data type available
    #endif
#endif
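
Note that LONG_IS_32BITS and INT_IS_32BITS are not defined by any standard; they are placeholders you would have to define yourself. A sketch of one way to derive them from the guaranteed macros in limits.h (this tests the value range only, which is what most code cares about):

#include <limits.h>

/* width is 32 bits: 31 value bits plus the sign bit */
#if LONG_MAX == 2147483647L
    #define LONG_IS_32BITS
#endif

#if INT_MAX == 2147483647
    #define INT_IS_32BITS
#endif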

Alternatively, C99 and above allows for exact-width integer types intN_t and uintN_t (a small compile-time sanity check is sketched after the quoted rules below):


  1. The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two's complement representation. Thus, int8_t denotes a signed integer type with a width of exactly 8 bits.
  2. The typedef name uintN_t designates an unsigned integer type with width N. Thus, uint24_t denotes an unsigned integer type with a width of exactly 24 bits.
  3. These types are optional. However, if an implementation provides integer types with widths of 8, 16, 32, or 64 bits, no padding bits, and (for the signed types) that have a two's complement representation, it shall define the corresponding typedef names.
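
So if uint32_t exists at all, it is guaranteed to be exactly 32 bits wide. A one-line sketch that documents such a dependency at compile time (using C11's _Static_assert; under plain C99 you would need the old negative-array-size trick instead):

#include <stdint.h>
#include <limits.h>

/* Refuses to compile if uint32_t is not exactly 32 bits wide;
   by the rules above it passes wherever uint32_t is defined at all. */
_Static_assert(sizeof(uint32_t) * CHAR_BIT == 32, "uint32_t must be 32 bits");
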
paxdiablo
  • Beat me to it :) Relying upon the size of a built-in variable in C or C++ is inherently a bug. – kyoryu Aug 05 '09 at 04:23
  • The C standard does mandate minimum ranges (which implies minimum sizes). The minimum range of int is -32767 to +32767, and the minimum range of long is -2147483647 to +2147483647. – caf Aug 05 '09 at 04:47
  • (which means that if you just want a variable that can store the range of a 32-bit integer, use long or unsigned long - no preprocessor bodginess required). – caf Aug 05 '09 at 04:50
  • True, that's okay for ensuring that a data type will hold at least a given value but you may want an exactly-32-bit value (e.g., for binary writes to a file) rather than an at-least-32-bit one. That's where you need the preprocessor. – paxdiablo Aug 05 '09 at 11:28
  • Sorry, late to the party here, but isn't your code incorrect? Shouldn't the second typedef be `typedef int int32;`? – Justin Meiners May 02 '13 at 03:27
  • @Justin, late you may be but you appear to be the only one who's noticed in the last three+ years. Good catch, changed to fix. – paxdiablo May 02 '13 at 09:35
  • @paxdiablo oh good, I thought I had gone crazy for a second there. Thank you! – Justin Meiners May 02 '13 at 15:06
  • @caf: Always using `long` when an `int` would suffice seems silly, though. I need a variable which can store the numbers 0 through 999,999,999, and I'd rather not have it be 64 bits wide (as `long`s sometimes are) when 32 bits is plenty. It seems weird that the `intN_t` integer types weren't available from the beginning. They realized CPUs of different bit widths would be running C software, but they didn't bother having a portable way of guaranteeing bit width for 30 years? – ArtOfWarfare Jan 20 '15 at 18:59
9

At this moment in time, most desktop and server platforms use 32-bit integers, and even many embedded platforms (think handheld ARM or x86) use 32-bit ints. To get to a 16-bit int you have to get very small indeed: think "Berkeley mote" or some of the smaller Atmel Atmega chips. But they are out there.

Norman Ramsey
  • Yay, the only answer that actually answers the question! However, it would be very nice to know where you got that answer and the specific compilers/platforms that use 32-bit ints, or maybe just the ones that DON'T. – Winter Dragoness Dec 09 '14 at 18:45
6

No. Small embedded systems use 16-bit integers.
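
A concrete consequence, sketched for illustration: arithmetic that is harmless with a 32-bit int breaks where int is only 16 bits wide.

#include <stdio.h>

int main(void) {
    /* Fine where int is 32 bits; signed overflow (undefined
       behaviour) where int is only 16 bits wide. */
    int x = 30000 + 30000;
    printf("%d\n", x);
    return 0;
}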

starblue
  • Fun fact: `sizeof(int) == 1` on some TI DSPs, e.g. C2000. Both `char` and `int` have 16 bits (and `short` too, of course). – starblue Jun 01 '18 at 05:23
2

It depends greatly on your compiler. Some compile `int` as 64-bit on 64-bit machines; some compile it as 32-bit. Embedded systems are their own little special ball of wax.

The best thing you can do is check:

printf("%zu\n", sizeof(int));

Note that sizeof reports a size in bytes, not bits. Do sizeof(int) * CHAR_BIT to get bits (and note that %zu is the correct printf conversion for the size_t result).

Code to print the number of bits for various types:

#include <limits.h>
#include <stdio.h>

int main(void) {
    /* sizeof yields a size_t, so use the %zu conversion from C99 */
    printf("short is %zu bits\n",     CHAR_BIT * sizeof(short));
    printf("int is %zu bits\n",       CHAR_BIT * sizeof(int));
    printf("long is %zu bits\n",      CHAR_BIT * sizeof(long));
    printf("long long is %zu bits\n", CHAR_BIT * sizeof(long long));
    return 0;
}
Eric
  • This is wrong on many dimensions. First, `sizeof` can operate on types so there is no need for `randomint`. Second, `CHAR_BITS` is not guaranteed to be eight. There are a few more things but these are the errors related to the question. – Sinan Ünür Aug 05 '09 at 04:18
  • True, not always 8 bits in a byte – Ed S. Aug 05 '09 at 04:21
  • @Eric it's `CHAR_BIT`. I misspelled it in my comment. – Sinan Ünür Aug 05 '09 at 04:26
  • It's also not guaranteed that every bit in the underlying representation of the type is a value bit - you might have things like overflow bits (or even padding bits). – caf Aug 05 '09 at 04:52
1

TI are still selling OMAP boards with the C55x DSPs on them, primarily used for video decoding. I believe the supplied compiler for these has a 16-bit int. It is hardly a dinosaur (the Nokia 770 was released in 2005), although you can get 32-bit DSPs.

For most code you write, you can safely assume it won't ever be run on a DSP. But perhaps not all.

Steve Jessop
0

Well, most ARM-based processors can run Thumb code, which is a 16-bit mode. That includes the yet-only-rumored Android notebooks and the bleeding-edge smartphones.

Also, some graphing calculators use 8-bit processors, and I'd call those fairly modern as well.

Christoffer
  • You can't have a conforming C implementation with 8 bit int, so even if those calculators are 8-bit, if they have a C compiler then it must make int at least 16 bit. – Steve Jessop Aug 05 '09 at 12:13
  • Thumb code still uses 32-bit int; the '16-bit' aspect is just the size of the encoded instructions. – Matthew Wightman Aug 05 '09 at 12:30
  • In the ANSI C standard, the smallest acceptable size for an int and short int is 16 bits, so there is no way to have 8 bits. :) – Samia Ruponti Apr 19 '15 at 03:34
0

If you are also interested in the actual max/min values rather than the number of bits, limits.h contains pretty much everything you want to know.
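
For instance, a short sketch dumping a few of those limits (all of these macros are required to be present by the C standard):

#include <limits.h>
#include <stdio.h>

int main(void) {
    printf("CHAR_BIT  = %d\n", CHAR_BIT);
    printf("INT_MIN   = %d\n", INT_MIN);
    printf("INT_MAX   = %d\n", INT_MAX);
    printf("LONG_MIN  = %ld\n", LONG_MIN);
    printf("LONG_MAX  = %ld\n", LONG_MAX);
    printf("ULONG_MAX = %lu\n", ULONG_MAX);
    return 0;
}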

Michael Stum