13

In a 64-bit CPU, if the int is 32 bits whereas the long is 64 bits, would the long be more efficient than the int?

  • 12
    Define "efficient". – Ken D Sep 16 '12 at 13:49
  • 2
    If your main concern is performance, consider using the `int_fast32_t` like types from `stdint.h`. – Macmade Sep 16 '12 at 13:51
  • possible duplicate of [Which is better? To use short or int?](http://stackoverflow.com/questions/1904857/which-is-better-to-use-short-or-int) – bmargulies Sep 16 '12 at 14:00
  • @LordCover: I'm actually asking about speed, sorry for my confusing question:) – rubbishbin01 Sep 16 '12 at 14:45
  • Just don't ask yourself the question. Most probably your compiler has got things right such that the "semantic" integer types are the most efficient. Use `size_t`, `ptrdiff_t`, `uintptr_t` and Co wherever you may. Only use fixed width types when you need to know something precise about the width of the type. And then use `uint64_t` etc to make your intention clear. – Jens Gustedt Sep 16 '12 at 14:59
  • 1
    @Macmade Unfortunately, my compiler (gcc-4.5.1 on 64-bit Linux) typedefs `int_fast32_t` to `long int`, but for many (if not most) computations, `int` is faster. So if performance is the main concern, one should not rely on the compiler's preconceptions, but measure. – Daniel Fischer Sep 16 '12 at 15:02

4 Answers

9

The main problem with your question is that you did not define "efficient". There are several possible efficiency related differences.

Of course if you need to use 64 bits, then there's no question. But sometimes you could use 32 bits and you wonder if it would be better to use 64 bits instead.

Data Size Efficiency

Using 32 bits will use less memory. This is more efficient, especially if you use a lot of them. Not only is it more efficient in the sense that you may avoid swapping, but also in the sense that you'll have fewer cache misses. If you use just a few, then the efficiency difference is irrelevant.

Code Size Efficiency

This is heavily dependent on the architecture. Some architectures need longer instructions to manipulate 32-bit values, others need longer instructions to manipulate 64-bit values, and for others it makes no difference. On Intel processors, for example, 32 bits is the default operand size even in 64-bit code. Smaller code may have a small advantage both in cache behavior and in pipeline usage. But which operand size yields smaller code depends on the architecture.

Execution Speed Efficiency

In general there should be no difference beyond the one implied by code size. Once the instruction has been decoded, the timing for mere execution is generally identical. However, once again, this is architecture specific. There are architectures that do not have native 32-bit arithmetic, for example.

My suggestion:

If it's just some local variables or data in small structures that you do not allocate in huge quantities, use int, and write it in a way that does not assume a size, so that a new version of the compiler, or a different compiler that uses a different size for int, will still work.

However, if you have huge arrays or matrices, then use the smallest type you can and make sure its size is explicit.

Analog File
5

On the common x86-64 architecture, 32-bit arithmetic is never slower than 64-bit arithmetic. So int is always the same speed or faster than long. On other architectures that don't actually have built-in 32-bit arithmetic, such as the MMIX, this might not hold.

Basic wisdom holds: write it without considering such micro-optimizations and, if necessary, profile and optimize.

fuz
  • "On the common x86-64 architecture, 32-bit arithmetic is never slower than 64 bit arithmethic.", reason is...? – rubbishbin01 rubbishbin01 Sep 16 '12 at 14:51
  • 1
rubbishbin01: x86-64 is basically an extension of the 32-bit x86 architecture. Most opcodes also have 32-bit variants that work the same way as in x86 mode. Also, because many users still run 32-bit-only operating systems, those processors are optimized to operate on 32 bits as fast as possible. – fuz Sep 16 '12 at 14:58
  • I guess it's because both Intel and AMD designed the architecture to be like that. More specifically, the default operand size is either 32 bits or 16 bits. Even in 64-bit code the default operand size is 32 bits. While mere execution of a decoded instruction is identical, 32-bit-operand instructions are either smaller than or the same size as 64-bit instructions, implying non-worse cache and pipeline behavior. – Analog File Sep 16 '12 at 14:58
1

If you are trying to store 64 bits of data, use a long. If you aren't going to need the 64 bits, use the regular 32-bit int.

BSull
-3

Yes, a 64-bit number would be more efficient than a 32-bit number.

On a 64-bit CPU, most compilers would give you 64 bits if you ask for a long int, though.

To see the size with your current compiler:

#include <stdio.h>

int main(int argc, char **argv){
    long int foo;
    printf("The size of an int is: %ld bytes\n", sizeof(foo));
    printf("The size of an int is: %ld bits\n", sizeof(foo) * 8);
    return 0;
}

If your CPU is running in 64-bit mode, you can expect that the CPU will use that regardless of what you ask. All the registers are 64-bit and the operations are 64-bit, so if you want a 32-bit result it will generally convert the 64-bit result to 32 bits for you.

The limits.h on my system defines long int as:

/* Minimum and maximum values a `signed long int' can hold.  */
#  if __WORDSIZE == 64
#   define LONG_MAX 9223372036854775807L
#  else
#   define LONG_MAX 2147483647L
#  endif
#  define LONG_MIN  (-LONG_MAX - 1L)
Wolph
  • 3
    Should I believe the 2nd line of this answer? – Ken D Sep 16 '12 at 13:52
  • That's true, addresses are 64-bit (they fit in registers and cache lines), no matter how much you allocate. But wouldn't cache writes be faster/more efficient since the data takes up less of it? – nullpotent Sep 16 '12 at 13:56
  • 1
    I'm using a 64-bit CPU (and OS), but `sizeof(int)` is still `4`. Are you [confusing it with the size of a pointer](https://gist.github.com/3732581)? –  Sep 16 '12 at 14:09
  • 2
Some 64-bit architectures such as x86-64 actually have 32-bit arithmetic operations. On those, 32-bit arithmetic is never slower than 64-bit arithmetic and sometimes (div/mul) even much faster. – fuz Sep 16 '12 at 14:50