85

What's the best way to declare an integer type which is always 4 bytes on any platform? I'm not worried about certain devices or old machines that have a 16-bit int.

ZZ Coder

  • In C, a byte does not have to be 8 bits, so 32 bits and 4 bytes could mean different things. – KTC Aug 04 '09 at 18:37
  • @KTC: are there any platforms that define byte differently? – Mr. Shiny and New 安宇 Aug 04 '09 at 18:57
  • I am also curious to know where char != 8 bits and a byte != 8 bits. char != 8 bits seems OK, as I could have char == 4 bits in my own custom-designed system or some old system, but where does byte != 8 bits occur? – seg.server.fault Aug 04 '09 at 19:00
  • Wiki (http://en.wikipedia.org/wiki/Byte) has a nice history of the usage, and examples where byte != 8 bits. They are rarer today than they used to be, but the C standard is careful to avoid the assumption. – RBerteig Aug 04 '09 at 19:28
  • @seg.server.fault, In C (and C++), char === 1 byte. It just doesn't have to have 8 bits. The number of bits is defined as CHAR_BIT in <limits.h>, which has to be at least 8. – KTC Aug 04 '09 at 20:04
  • I used to use a machine called Cyber something made by CDC, which had 9-bit bytes. But I assume those days are long gone. – ZZ Coder Aug 04 '09 at 20:10
  • Quite a few DSPs and the like have 16-bit chars (and C has no concept of a "byte" other than char - it is in effect the smallest addressable unit of memory). – Steve Jessop Aug 04 '09 at 22:54
  • As an existence proof, have a table: http://www.insidedsp.com/Articles/tabid/64/articleType/ArticleView/articleId/178/Getting-Better-DSP-Code-Out-of-Your-Compiler.aspx – Steve Jessop Aug 04 '09 at 22:56
  • One of the Honeyboxen we still have has 6-bit and 9-bit bytes based on the addressing mode you're in. – user7116 Oct 21 '09 at 17:25

10 Answers

122
#include <stdint.h>

int32_t my_32bit_int;
Corey D
  • Just to note, intN_t (and uintN_t) is optional in terms of the standard. It is required to be defined if and only if the system has types that meet the requirement. – KTC Aug 04 '09 at 18:39
  • That's what you want though. If the code really requires a 32-bit int, and you try to compile it on a platform that doesn't support them, you *want* the compiler to punt back to the developer. Having it pick some other size and go on would be horrible. – T.E.D. Aug 04 '09 at 19:20
  • Note that the header <inttypes.h> is explicitly documented to include the header <stdint.h> (this is not usual for C headers), but the <stdint.h> header may be available where <inttypes.h> is not and may be a better choice for portability. The <stdint.h> header is an invention of the standards committee, and was created so that free-standing implementations of C (as opposed to hosted implementations - normal ones) only have to support <stdint.h> and not necessarily <inttypes.h> too (it would also mean supporting <stdio.h>, which is otherwise not necessary). – Jonathan Leffler Aug 04 '09 at 19:28
  • Is there a way to define the int32_t as unsigned? – Matthew Herbst Apr 21 '14 at 21:22
  • @MatthewHerbst, `uint32_t`. – user545424 May 06 '14 at 17:02
  • I used this in my code for a 32-bit number, which broke it; I used `uint32_t` instead, which worked. – Crizly Apr 10 '15 at 15:04
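As the comments above note, the exact-width types in <stdint.h> are optional, and the usual advice is to fail the build rather than silently substitute another size. A minimal sketch of that approach, relying on the fact that <stdint.h> defines INT32_MAX if and only if it provides int32_t (the variable names here are just illustrative):

#include <stdint.h>

/* <stdint.h> defines INT32_MAX only when it also provides int32_t, so this
   aborts compilation on platforms without an exact 32-bit type instead of
   letting some other width sneak in. */
#ifndef INT32_MAX
#error "This code requires an exact-width 32-bit integer type (int32_t)."
#endif

int32_t  my_32bit_int;   /* signed, exactly 32 bits   */
uint32_t my_32bit_uint;  /* unsigned, exactly 32 bits */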
15

C doesn't concern itself very much with exact sizes of integer types. C99 introduces the header stdint.h, which is probably your best bet. Include that and you can use e.g. int32_t. Of course, not all platforms support it.
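Where that support is missing, one option is to fall back to int_least32_t, which every C99 implementation must provide. A minimal sketch (the typedef name my_i32 is just illustrative):

#include <stdint.h>
#include <stdio.h>

/* int32_t is optional, but int_least32_t (at least 32 bits wide) is
   mandatory in C99, so it makes a reasonable fallback. */
#ifdef INT32_MAX
typedef int32_t my_i32;        /* exactly 32 bits */
#else
typedef int_least32_t my_i32;  /* at least 32 bits */
#endif

int main(void)
{
    my_i32 x = 2000000000;     /* fits in any 32-bit signed type */
    printf("%ld\n", (long)x);  /* cast: the width of my_i32 is not fixed */
    return 0;
}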

nos
12

Corey's answer is correct for "best", in my opinion, but a simple "int" will also work in practice (given that you're ignoring systems with 16-bit int). At this point, so much code depends on int being 32-bit that system vendors aren't going to change it.

(See also why long is 32-bit on lots of 64-bit systems and why we have "long long".)

One of the benefits of using int32_t, though, is that you're not perpetuating this problem!
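If you do take the plain-int route, it is cheap to verify the assumption at compile time. A minimal, C89-compatible sketch (with C11 you could use _Static_assert instead):

#include <limits.h>

/* The array size is negative, and therefore a compile-time error, whenever
   int is not exactly 32 bits wide on the target. */
typedef char assert_int_is_32_bits[(CHAR_BIT * sizeof(int) == 32) ? 1 : -1];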

Brooks Moses
  • There's no need to “ignore systems with 16-bit int”; long is guaranteed to be at least 32 bits wide everywhere. – Bastien Léonard Aug 04 '09 at 20:30
  • Right, but using "long" doesn't address the initial request, which is something that's exactly 32 bits. On (at least some flavors of) 64-bit Linux, for example, a long is 64 bits -- and that's something that's likely to come up in actual practice. – Brooks Moses Aug 04 '09 at 23:16
5

You need to include inttypes.h instead of stdint.h because stdint.h is not available on some platforms such as Solaris, and inttypes.h will include stdint.h for you on systems such as Linux. If you include inttypes.h then your code is more portable between Linux and Solaris.

This link explains what I'm saying: HP link about inttypes.h

And this link has a table showing why you don't want to use long or int if you have an intention of a certain number of bits being present in your data type. IBM link about portable data types
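Beyond the types themselves, <inttypes.h> also gives you the printf/scanf format macros, so you don't have to guess which length modifier matches int32_t on a given platform. A minimal sketch, assuming the platform provides the exact-width types:

#include <inttypes.h>  /* pulls in <stdint.h> plus the printf/scanf format macros */
#include <stdio.h>

int main(void)
{
    int32_t  s = -123456789;
    uint32_t u = 4000000000u;

    /* PRId32 and PRIu32 expand to the correct conversion specifiers for
       32-bit types on the current platform. */
    printf("signed:   %" PRId32 "\n", s);
    printf("unsigned: %" PRIu32 "\n", u);
    return 0;
}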

Cutlasj
5

You could hunt down a copy of Brian Gladman's brg_types.h if you don't have stdint.h.

brg_types.h will discover the sizes of the various integers on your platform and will create typedefs for the common sizes: 8, 16, 32 and 64 bits.

cfrantz
  • Actually, looking at a few "brg_types.h" files I found, this file only defines unsigned integers (e.g. "uint_8t", "uint_16t", "uint_32t" and "uint_64t"). The OP needed a signed integer. – swdev Mar 23 '16 at 07:20
4

C99 or later

Use <stdint.h>.

If your implementation supports 2's complement 32-bit integers then it must define int32_t.

If not then the next best thing is int_least32_t which is an integer type supported by the implementation that is at least 32 bits, regardless of representation (two's complement, one's complement, etc.).

There is also int_fast32_t which is an integer type at least 32-bits wide, chosen with the intention of allowing the fastest operations for that size requirement.

ANSI C

You can use long, which is guaranteed to be at least 32-bits wide as a result of the minimum range requirements specified by the standard.

If you would rather use the smallest integer type to fit a 32-bit number, then you can use preprocessor statements like the following with the macros defined in <limits.h>:

#include <limits.h>

#define TARGET_MAX 2147483647L

#if   SCHAR_MAX >= TARGET_MAX
  typedef signed char int32;
#elif SHRT_MAX  >= TARGET_MAX
  typedef short int32;
#elif INT_MAX   >= TARGET_MAX
  typedef int int32;
#else
  typedef long int32;
#endif

#undef TARGET_MAX
Veltas
1

If stdint.h is not available for your system, make your own. I always have a file called "types.h" that has typedefs for all the signed/unsigned 8-, 16-, and 32-bit values.
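A minimal sketch of such a header, assuming the common 8-bit char and 16-bit short, and choosing between int and long for the 32-bit case via <limits.h> (the type names are just one possible convention):

/* types.h -- hand-rolled fixed-width typedefs for platforms without
 * <stdint.h>.  Assumes 8-bit char and 16-bit short; verify on your target. */
#ifndef MY_TYPES_H
#define MY_TYPES_H

#include <limits.h>

typedef signed char    int8;
typedef unsigned char  uint8;
typedef short          int16;
typedef unsigned short uint16;

#if INT_MAX >= 2147483647L
typedef int            int32;
typedef unsigned int   uint32;
#else
typedef long           int32;
typedef unsigned long  uint32;
#endif

#endif /* MY_TYPES_H */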

Robert Deml
1

You can declare a 32-bit variable, signed or unsigned, with the exact-width types from stdint.h:

int32_t variable_name;
uint32_t variable_name;
moto
0

Also, depending on your target platforms, you can use Autotools for your build system.

It will check whether stdint.h/inttypes.h exist and, if they don't, will create appropriate typedefs in a "config.h".
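A minimal sketch of the C side of that arrangement, assuming the conventional HAVE_* macros that Autoconf's AC_CHECK_HEADERS writes into config.h (the fallback typedefs are illustrative and would need adjusting per target):

#include "config.h"  /* generated by the configure script */

#if defined(HAVE_INTTYPES_H)
#include <inttypes.h>
#elif defined(HAVE_STDINT_H)
#include <stdint.h>
#else
/* Last-resort typedefs for a platform where neither header exists;
   pick types that are actually 32 bits wide on that target. */
typedef int          int32_t;
typedef unsigned int uint32_t;
#endif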

Spudd86
0

stdint.h is the obvious choice, but it's not necessarily available.

If you're using a portable library, it's possible that it already provides portable fixed-width integers. For example, SDL has Sint32 (S is for “signed”), and GLib has gint32.
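For instance, with SDL (a sketch, assuming the SDL development headers are installed and on the include path):

#include <SDL.h>  /* SDL provides Sint32 / Uint32 fixed-width typedefs */

Sint32 position = -42;  /* signed 32-bit */
Uint32 flags    = 0;    /* unsigned 32-bit */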

Bastien Léonard