
My question is relatively simple. On 32-bit platforms it's generally best to use Int32 rather than short or long, since the CPU processes 32 bits at a time. So on a 64-bit architecture, does that mean it's faster to use longs? I created a quick and dirty app that copies int and long arrays to benchmark this. Here is the code (I did warn it's dirty):

static void Main(string[] args)
{
    // fill the long array with 1..256
    var lar = new long[256];
    for (int z = 1; z <= 256; z++)
    {
        lar[z - 1] = z;
    }

    // time 100 million copies of the long array
    var watch = DateTime.Now;
    for (int z = 0; z < 100000000; z++)
    {
        var lard = new long[256];
        lar.CopyTo(lard, 0);
    }
    var res2 = watch - DateTime.Now;

    // fill the int array with 1..256
    var iar = new int[256];
    for (int z = 1; z <= 256; z++)
    {
        iar[z - 1] = z;
    }

    // time 100 million copies of the int array
    watch = DateTime.Now;
    for (int z = 0; z < 100000000; z++)
    {
        var iard = new int[256];
        iar.CopyTo(iar, 0);
    }
    var res1 = watch - DateTime.Now;

    Console.WriteLine(res1);
    Console.WriteLine(res2);
}

The results it produces show long as about 3 times as fast as int, which makes me wonder whether I should start using longs for counters and such. I also ran a similar counter test, and long was only insignificantly faster. Does anybody have any input on this? I also understand that even if longs are faster, they will still take up twice as much space.

Omego2K
    For starters, use Stopwatch instead of DateTime.Now (it's more accurate for these kinds of performance tests). Secondly, in the int iteration you are copying iar to itself. Not sure if that will affect anything, but just in case. – Flatliner DOA Feb 15 '15 at 05:30
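For reference, here is a cleaned-up version of the benchmark along the lines of that comment: it uses Stopwatch, copies iar into iard rather than back into itself, and reads elapsed time directly (the original `watch - DateTime.Now` actually yields a negative TimeSpan). This is just a sketch; the class name is illustrative and results will vary by machine and JIT.

using System;
using System.Diagnostics;

class CopyBench
{
    static void Main()
    {
        var lar = new long[256];
        var iar = new int[256];
        for (int z = 0; z < 256; z++)
        {
            lar[z] = z + 1;
            iar[z] = z + 1;
        }

        // time 100 million copies of the long array
        var sw = Stopwatch.StartNew();
        for (int z = 0; z < 100000000; z++)
        {
            var lard = new long[256];
            lar.CopyTo(lard, 0);
        }
        sw.Stop();
        Console.WriteLine("long copies: " + sw.Elapsed);

        // time 100 million copies of the int array
        sw.Restart();
        for (int z = 0; z < 100000000; z++)
        {
            var iard = new int[256];
            iar.CopyTo(iard, 0);   // copy into iard, not back into iar
        }
        sw.Stop();
        Console.WriteLine("int copies:  " + sw.Elapsed);
    }
}

On a 64-bit machine, don't be surprised if the int copies come out at least as fast as the long copies: the long array is twice as many bytes, so there is simply more memory to move.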

2 Answers


No. It takes longer to do 64-bit arithmetic on a 32-bit CPU because the CPU can only handle 32 bits at a time. A 64-bit CPU will just ignore the missing 32 bits.

Also, try not to preemptively optimize too much. Only optimize when there is a noticeable bottleneck.
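One way to see the counter case concretely is a minimal increment microbenchmark (a sketch; the class name and iteration count are arbitrary, and the JIT may partly optimize the loops away, so treat the numbers as rough):

using System;
using System.Diagnostics;

class CounterBench
{
    static void Main()
    {
        const int iterations = 500000000;

        // 32-bit counter
        var sw = Stopwatch.StartNew();
        int i = 0;
        for (int z = 0; z < iterations; z++) i++;
        sw.Stop();
        Console.WriteLine("int:  " + sw.Elapsed + " (result " + i + ")");

        // 64-bit counter
        sw.Restart();
        long l = 0;
        for (int z = 0; z < iterations; z++) l++;
        sw.Stop();
        Console.WriteLine("long: " + sw.Elapsed + " (result " + l + ")");
    }
}

On a 64-bit JIT the two loops typically run at the same speed, since the ALU is 64 bits wide either way; on a 32-bit JIT the long version is slower, because each increment has to touch two 32-bit words.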

beatgammit
  • I think you misunderstood what I stated. I agree that it takes longer to use 64-bit sized types on a 32-bit CPU. My question is whether, on a 64-bit CPU, I should use longs instead of ints, because longs are 64-bit and ints are 32-bit. – Omego2K Sep 12 '11 at 17:25
  • "A 64-bit CPU will just ignore the missing 32-bits." There is no difference in speed. Anyway, if you're trying to optimize at the CPU level, don't use a language that compiles to byte code; use assembly language or C. – beatgammit Sep 12 '11 at 17:44
  • Java compiles to byte code and .NET to IL. I'm just curious is all. If it's true that it will just ignore them, then why is using shorts slower on a 32-bit system? Shorts are two bytes and get padded to 32 bits on a 32-bit CPU. From what I understand, if you use types that are smaller than the CPU architecture, they get padded to be equal to it. Which means if you use a short (2 bytes) with a 32-bit CPU, it would pad the short to 32 bits when it gets processed. Afterwards, when the time comes for it to be read, only the first 16 bits are read. Also, why is using longs in my copy example 3 times faster than using ints? – Omego2K Sep 13 '11 at 01:20
  • This is a continuation of my above comment. I am really just curious. Also, I don't know assembly and don't prefer C or even C++, since interpreted languages are a lot simpler. – Omego2K Sep 13 '11 at 01:21

There's a similar question about this here: Will using longs instead of ints benefit in 64bit java

Typically, in a real application you're more concerned about cache misses, so this is less of a concern overall.
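To illustrate the cache point, here is a rough sketch (class name and array size are illustrative) that sums an int[] and a long[] with the same element count; the long[] occupies twice the bytes, so it streams twice as much data through the cache hierarchy:

using System;
using System.Diagnostics;

class CacheSketch
{
    static void Main()
    {
        const int n = 16 * 1024 * 1024;   // int[] is 64 MB, long[] is 128 MB

        var ints = new int[n];
        var longs = new long[n];

        // sum the int array
        var sw = Stopwatch.StartNew();
        long intSum = 0;
        for (int k = 0; k < n; k++) intSum += ints[k];
        sw.Stop();
        Console.WriteLine("int[]  pass: " + sw.Elapsed);

        // sum the long array (same element count, twice the bytes)
        sw.Restart();
        long longSum = 0;
        for (int k = 0; k < n; k++) longSum += longs[k];
        sw.Stop();
        Console.WriteLine("long[] pass: " + sw.Elapsed);
    }
}

If the long[] pass comes out slower here, it's the extra memory traffic rather than 64-bit arithmetic that costs; this is also why the "twice as much space" caveat in the question matters in practice.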

entitledX
  • So this raises a couple of questions. 1) Why does my test application show longs being about 3 times faster than ints? 2) How do languages like C++ handle this, since ints are 64-bit in it? Do they use shorts instead of ints? – Omego2K Sep 12 '11 at 17:37