
In the program listed below, sizeof(int) and sizeof(long) are equal on my machine (both are 4 bytes, or 32 bits). A long, as far as I know, is 8 bytes. Is this correct? I have a 64-bit machine.

#include <stdio.h>
#include <limits.h>

int main(void){
    printf("sizeof(short) = %d\n", (int)sizeof(short));
    printf("sizeof(int) = %d\n", (int)sizeof(int));
    printf("sizeof(long) = %d\n", (int)sizeof(long));
    printf("sizeof(float) = %d\n", (int)sizeof(float));
    printf("sizeof(double) = %d\n", (int)sizeof(double));
    printf("sizeof(long double) = %d\n", (int)sizeof(long double));
    return 0;
}
ChrisMcJava
  • See [this](http://stackoverflow.com/questions/1240121/sizeofint-sizeoflong-sizeoflong-long-always-true) – devnull Sep 19 '13 at 17:18
  • I think the output of your code answers your question. – Charlie Burns Sep 19 '13 at 17:29
  • @CharlieBurns Well, a code example does not speak toward the standard, since compilers may not be compliant, may have implementation-specific behavior, or may have extensions. So an example in code does not prove what it should be. – Shafik Yaghmour Sep 19 '13 at 18:59
  • @ShafikYaghmour his question does not mention anything about the standard. His question was 'A long, as far as I know, is 8 bytes. Is this correct?' The answer, obvious from his code, is "No, a long on your machine, with your compiler, is 4 bytes". – Charlie Burns Sep 19 '13 at 21:29

4 Answers


A long, as far as I know, is 8 bytes. Is this correct?

No, this is not correct. The C and C++ specifications only require that long be at least 32 bits; int can be smaller. On many platforms, in both C and C++, long and int are both 32 bits.

This is a very good reason to prefer fixed-width integer types such as int64_t, if they are available to you because you are using C99 (they live in <stdint.h>) or a framework that provides an equivalent type.
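As a minimal sketch (assuming a C99 implementation that actually provides the optional int64_t type and the <inttypes.h> format macros), the fixed-width type stays 64 bits regardless of how wide long happens to be:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void) {
    /* int64_t, where it exists, is exactly 64 bits wide whether the
       platform's long is 32 or 64 bits. */
    int64_t big = 9000000000;  /* would overflow a 32-bit long */
    printf("sizeof(long)    = %d\n", (int)sizeof(long));
    printf("sizeof(int64_t) = %d\n", (int)sizeof(int64_t));
    printf("big = %" PRId64 "\n", big);  /* PRId64 supplies the matching printf format */
    return 0;
}

On LP64 and LLP64 systems alike the last two lines print the same values; only sizeof(long) differs.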

Reed Copsey
  • And `sizeof(long)` is 8 bytes on my very typical 64-bit machine, so I find "on most machines" hard to accept. – cnicutar Sep 19 '13 at 17:20
  • @cnicutar: A machine is neither a C implementation nor determines what size `long` is in a C implementation targeting that machine. Do you mean `sizeof(long)` is 8 bytes in your very typical C implementation? – Eric Postpischil Sep 19 '13 at 17:59
  • @cnicutar Who said "on most machines"? The size of `long`, even on 64-bit machines, also depends on the compiler and its options. On my 64-bit machine, 4 of 4 compilers default to sizeof(long) as 4. One compiler allows size 8. – chux - Reinstate Monica Sep 19 '13 at 18:05
  • Which is why I used "platform" not machine - by platform, I'm specifying the machine + its runtime. – Reed Copsey Sep 19 '13 at 18:08
  • @chux The original answer contained something like "typical " - which is why my comment got the votes. And the original answer only mentioned C++, without C. Glad to see it improved in the meantime :-) – cnicutar Sep 19 '13 at 19:08
  • The contents of `<cstdint>` are available in C as `<stdint.h>`. In fact, C had these types first. – Sep 19 '13 at 20:06

It depends on your ABI. If you are using Windows, Microsoft chose an LLP64 model: long and int are both 32 bits, while long long and pointers are 64 bits in 64-bit builds, for legacy reasons.

Most UNIX platforms chose an LP64 model, which makes int 32 bits and long, long long, and pointers 64 bits in 64-bit builds.

But, as Reed Copsey notes, the standard only states minimum and relative lengths: long must be at least as large as int.
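To see which model a particular toolchain uses, a small probe in the spirit of the question's program (a sketch, with nothing platform-specific assumed) prints the sizes that distinguish LP64 from LLP64:

#include <stdio.h>

int main(void) {
    /* LP64  (typical 64-bit UNIX):    int=4, long=8, long long=8, void*=8
       LLP64 (64-bit Windows builds):  int=4, long=4, long long=8, void*=8 */
    printf("sizeof(int)       = %d\n", (int)sizeof(int));
    printf("sizeof(long)      = %d\n", (int)sizeof(long));
    printf("sizeof(long long) = %d\n", (int)sizeof(long long));
    printf("sizeof(void *)    = %d\n", (int)sizeof(void *));
    return 0;
}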

chmeee

C is neither Java nor C#. As others wrote, the C spec states only minimum and relative lengths. Why? C was designed to be low-level and to compile and run on virtually any hardware ever made.

If a spec promises the programmer too much, its implementation can get tricky (and slow) when the hardware does not support such a thing. Java and C# developers don't care too much; they like the convenience of their languages for higher-level jobs and don't hesitate to install large virtual machines that take care of fulfilling all the promises (or at least most of them).

C programmers want to have control over their code even at the machine-instruction level. Complete control over the hardware. Only this way can all hardware features be used and maximum performance be reached. But you have to be careful with your assumptions about the hardware you are using. As always, with great power comes great responsibility.

Just to illustrate that: C does not assume 8-bit bytes. See CHAR_BIT in <limits.h>.
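A short sketch (assuming nothing beyond <stdio.h> and <limits.h>) makes this concrete by reporting how many bits a byte, and therefore a long, occupies on the current implementation:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* CHAR_BIT is guaranteed to be at least 8, but not to be exactly 8. */
    printf("bits per byte  = %d\n", CHAR_BIT);
    /* storage bits; the usable value range may be narrower if there are padding bits */
    printf("bits in a long = %d\n", (int)(sizeof(long) * CHAR_BIT));
    return 0;
}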

A related question is Size of an integer in C.

Palec
  • Certain aspects of the C standard have really not aged well. The standard-types headers improve a few things, but so far as I know there's nothing which really makes clear whether e.g. a comparison between an unsigned "at-least-8-bit" type and a signed "exactly-16-bit" type will be signed or unsigned. An implementation where operations on things shorter than 32 bits were possible but slower than those on larger types might use a 32-bit variable for the former type and a 16-bit one for the latter. I don't think there's any way the physically larger type could have a smaller rank, and thus... – supercat Jan 19 '14 at 22:58
  • ...I would think that on platforms where the "at least 8 bit" type was larger than an "exactly 16 bit" type, its rank would have to be higher as well, implying that the comparison would have to be done as unsigned. Having typedefs for particular sizes of types, without specifying how signed and unsigned values will interact seems like a recipe for disaster. If one were designing a new language, one could distinguish between unsigned values which are used to hold things which should wrap, and those which are used to hold *numbers*. One could then specify that the sum of... – supercat Jan 19 '14 at 23:03
  • ...a wrapping thing and a number would be a wrapping thing *of the same size as the original* (perhaps warning if the number was larger), while the sum of two numbers would be a type large enough to hold the result, if such a thing exists. A comparison between wrapping things and numbers would require that one be cast to the other, and comparisons between signed and unsigned values would be specified to be arithmetically correct. Such rules would largely eliminate the cases where type rankings matter. Unfortunately, they'd be incompatible with C as it exists. – supercat Jan 19 '14 at 23:06

No, the standard only defines minimums, which for long work out to 4 bytes and for int to 2 bytes. The C99 draft standard, section 5.2.4.2.1 Sizes of integer types, paragraph 1, says:

[...]Their implementation-defined values shall be equal or greater in magnitude[...]

and for long we have:

 LONG_MIN -2147483647 // -(2^31 - 1)
 LONG_MAX +2147483647 // 2^31 - 1     

which is 4 bytes. For completeness' sake, we have the following for int:

 INT_MIN -32767 // -(2^15 - 1)
 INT_MAX +32767 // 2^15 - 1

which is 2 bytes.
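A minimal sketch that puts these guaranteed minimum ranges next to what one particular implementation actually provides:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* The standard guarantees at least -32767..32767 for int and at least
       -2147483647..2147483647 for long; implementations may, and usually
       do, provide wider ranges. */
    printf("INT_MIN  = %d\nINT_MAX  = %d\n", INT_MIN, INT_MAX);
    printf("LONG_MIN = %ld\nLONG_MAX = %ld\n", LONG_MIN, LONG_MAX);
    printf("sizeof(int)  = %d\nsizeof(long) = %d\n",
           (int)sizeof(int), (int)sizeof(long));
    return 0;
}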

Shafik Yaghmour
  • No, the types are specified in the number of *bits*. C says nothing about *bytes*, only about the minimum number of bits that comprise each type. This isn't just pedantry, because you don't know in advance what `CHAR_BIT` will be until you examine it. – Toby Speight Jun 21 '17 at 16:34
  • @TobySpeight: C says a *lot* about bytes. But the minimal required sizes of the integer types are not defined either as bits or as bytes; they're defined as ranges of values. `int` has a range of at least -32767 to +32767. We can infer from the range that its size is at least 16 bits, and we can infer from the range and the value of `CHAR_BIT` how many bytes it requires (glossing over padding bits), but neither is specified directly. – Keith Thompson Dec 20 '17 at 00:18
  • It's entirely possible for `long` and `int` both to be 1 byte in size (but only if `CHAR_BIT >= 32`). – Keith Thompson Dec 20 '17 at 00:19
  • @Keith, your last comment expresses much more succinctly than mine what's wrong with this answer - any type may have a size of 1, if `CHAR_BIT` is sufficiently large (and that's not unknown; DSPs commonly lack addressing of small types). – Toby Speight Dec 20 '17 at 09:10