I have two pieces of code that are identical in C# and Java, but the Java one runs about twice as fast, and I want to know why. Both work on the same principle: using a big lookup table for performance.
Why is the Java version running roughly twice as fast as the C# one?
Java code:
int h1, h2, h3, h4, h5, h6, h7;
int u0, u1, u2, u3, u4, u5;
long time = System.nanoTime();
long sum = 0;
// Enumerate all C(52,7) seven-card combinations; each index lookup
// steps one card deeper into the precomputed ranking table.
for (h1 = 1; h1 < 47; h1++) {
    u0 = handRanksj[53 + h1];
    for (h2 = h1 + 1; h2 < 48; h2++) {
        u1 = handRanksj[u0 + h2];
        for (h3 = h2 + 1; h3 < 49; h3++) {
            u2 = handRanksj[u1 + h3];
            for (h4 = h3 + 1; h4 < 50; h4++) {
                u3 = handRanksj[u2 + h4];
                for (h5 = h4 + 1; h5 < 51; h5++) {
                    u4 = handRanksj[u3 + h5];
                    for (h6 = h5 + 1; h6 < 52; h6++) {
                        u5 = handRanksj[u4 + h6];
                        for (h7 = h6 + 1; h7 < 53; h7++) {
                            sum += handRanksj[u5 + h7];
                        }
                    }
                }
            }
        }
    }
}
double rtime = (System.nanoTime() - time) / 1e9; // elapsed time in seconds
System.out.println(sum);
System.out.println(rtime + " s");
It just enumerates all possible 7-card combinations. The C# version is identical except that at the end it uses Console.WriteLine.
The lookup table is defined as:
static int[] handRanksj;
Its size in memory is about 120 MB.
The C# version has the same test code; it's measured with Stopwatch instead of nanoTime() and uses Console.WriteLine instead of System.out.println, but it takes at least double the time.
Java takes about 400 ms. I run Java with the -server flag (a JVM runtime flag, not a compiler option). The C# build is set to Release with no DEBUG or TRACE defines.
What is responsible for the speed difference?