My question is relatively simple. On 32-bit platforms it's best to use Int32 as opposed to short or long, due to the CPU processing 32 bits at a time. So on a 64-bit architecture, does this mean it's faster to use longs? I created a quick and dirty app that copies int and long arrays to compare them. Here is the code (I did warn it's dirty):
static void Main(string[] args)
{
    // Fill a long[256] with the values 1..256.
    var lar = new long[256];
    for (int z = 1; z <= 256; z++)
    {
        lar[z - 1] = z;
    }

    // Time 100,000,000 copies of the long array.
    var watch = DateTime.Now;
    for (int z = 0; z < 100000000; z++)
    {
        var lard = new long[256];
        lar.CopyTo(lard, 0);
    }
    var res2 = DateTime.Now - watch;

    // Fill an int[256] with the values 1..256.
    var iar = new int[256];
    for (int z = 1; z <= 256; z++)
    {
        iar[z - 1] = z;
    }

    // Time 100,000,000 copies of the int array.
    watch = DateTime.Now;
    for (int z = 0; z < 100000000; z++)
    {
        var iard = new int[256];
        iar.CopyTo(iard, 0);
    }
    var res1 = DateTime.Now - watch;

    Console.WriteLine(res1);
    Console.WriteLine(res2);
}
The results it produces make long about 3 times as fast as int, which makes me curious as to whether I should start using longs for counters and such. I also did a similar counter test, and there long was only insignificantly faster. Does anybody have any input on this? I also understand that even if longs are faster, they will still take up twice as much space.
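For what it's worth, the counter test was roughly along these lines. This is only a sketch of what I mean rather than the exact code I ran; the Iterations constant and the CountWithInt/CountWithLong names are just placeholders, and it uses Stopwatch instead of DateTime for slightly better timing:

using System;
using System.Diagnostics;

class CounterBenchmark
{
    // Placeholder iteration count; adjust as needed.
    const int Iterations = 100000000;

    // Increment an int counter Iterations times and return it,
    // so the loop's result is actually observed.
    static int CountWithInt()
    {
        int count = 0;
        for (int i = 0; i < Iterations; i++)
        {
            count++;
        }
        return count;
    }

    // Same loop, but with a long counter and a long loop variable.
    static long CountWithLong()
    {
        long count = 0;
        for (long i = 0; i < Iterations; i++)
        {
            count++;
        }
        return count;
    }

    static void Main()
    {
        // Stopwatch has a higher resolution than DateTime.Now.
        var sw = Stopwatch.StartNew();
        var intResult = CountWithInt();
        sw.Stop();
        Console.WriteLine("int:  " + sw.Elapsed + " (" + intResult + ")");

        sw = Stopwatch.StartNew();
        var longResult = CountWithLong();
        sw.Stop();
        Console.WriteLine("long: " + sw.Elapsed + " (" + longResult + ")");
    }
}

I'm aware that in both tests the numbers can swing a lot depending on debug vs. release builds and whether the debugger is attached.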