1

Recently, I've been running some tests with C++ and VB.NET to compare execution speeds. In the last thread I posted, I talked about how I had encountered the fact that C++ was executing only just as fast as VB (when I expected it to be faster), but got that issue resolved. Now I'm hitting my head against another wall:

I had made a DLL to call from VB.NET so that one program could compare, side by side, the execution times of identical VB.NET and C++ code. But here's the interesting thing: VB.NET's execution time improved to the point that it was exactly identical to C++'s. After spending some time with the problem, I discovered that the culprit was the "Target CPU" option under Advanced Compile Options in Visual Studio 2008!

Since I'm running 64-bit Windows 7, I figured making the target CPU x64 would yield the best execution time. Wrong. Here are the results in execution time of a Windows Forms application for VB.NET, calculating all the prime numbers up to 10,000,000 and getting their sum.

Any CPU: 15.231 seconds

x86: 10.858 seconds

x64: 15.236 seconds

Below is the code I'm using, feel free to test it yourself:

Public Class Form1

    Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
        Dim watch As New Stopwatch
        watch.Start()

        Dim maxVal As Long = 10000000
        Dim sumOfPrimes As Long = 0
        For i As Integer = 2 To maxVal
            If isPrime(i) Then
                sumOfPrimes += i
            End If
        Next
        watch.Stop()
        Console.WriteLine(watch.ElapsedMilliseconds)
        Console.WriteLine("The sum of all the prime numbers below " & maxVal & " is " & sumOfPrimes)
    End Sub

    Function isPrime(ByVal NumToCheck As Integer) As Boolean
        For i As Integer = 2 To CInt(Math.Sqrt(CDbl(NumToCheck)))
            If NumToCheck Mod i = 0 Then
                Return False
            End If
        Next
        Return True
    End Function

End Class

Why would selecting the target CPU as 32-bit when I'm running 64-bit cause a performance increase? Any help with this problem would be much appreciated.

AndyPerfect
  • Is the C++ program being compiled in Debug or Release mode? – JustBoo Aug 13 '10 at 16:40
  • heh, I guess I could probably remove that C++ tag. What I was actually asking changed over time and it ended up being not applicable to C++. I'll go ahead and remove the tag. – AndyPerfect Aug 13 '10 at 16:43

3 Answers

2

There are a number of differences between 32 and 64 bit mode, which might each skew the performance difference one way or the other.

In 64-bit mode, the CPU has more registers, and each register is larger, which enables some operations to be performed faster (the higher register count may avoid memory accesses, for example)

But 32-bit has at least one advantage as well:

Pointers are 32 bits wide, where they are 64 in 64-bit mode. For programs that rely heavily on pointers, this can lead to significantly higher memory usage in 64-bit mode, which means a smaller fraction of the program data will fit in the CPU cache, and so performance may decrease in 64-bit mode, due to the higher number of cache misses.

Another factor is that the .NET Framework isn't equally good at both. There isn't yet feature parity between the 32- and 64-bit versions of the CLR, and the JIT might not be tuned as well for 64-bit code as it is for the 32-bit case.

jalf
  • Based on the tests I ran, 64 bit was always faster using VB. – dbasnett Aug 14 '10 at 19:42
  • Would it be safe to say then, that the majority of the time, .NET compiled programs run faster if run as a 32 bit program, or is it machine-specific? I'm seeing a wide range of answers across the board, some saying 64 bit runs better, others something else. I guess you're left with testing the code in x64 and x86 and simply seeing which works better with your code? – AndyPerfect Aug 16 '10 at 15:22
    @Andy: yep, test it. There are cases where one mode might be faster, and others where the other mode wins out. Most likely, the difference is too small to matter. And if it does matter, measure, measure, measure – jalf Aug 16 '10 at 16:16
0

.NET 3.5 SP1 (that is, Visual Studio 2008 SP1) came with a number of performance enhancements that let the JIT compiler inline more efficiently. Unfortunately, those enhancements only target x86:

How are value types implemented in the 32-bit CLR? What has been done to improve their performance?

However, with .NET 4.0 I found that x64 now gives equal or slightly better performance than x86.

Cyril Gandon
0

I ran the following code in Release mode, starting it with Ctrl + F5 (without the debugger attached).

Public Class Form1

    Function isPrime(ByVal NumToCheck As Integer) As Boolean
        Dim limit As Integer = CInt(Math.Ceiling(Math.Sqrt(CDbl(NumToCheck))))
        For i As Integer = 2 To limit
            If (NumToCheck Mod i = 0) Then
                Return False
            End If
        Next
        Return True
    End Function

    Dim watch As New Stopwatch
    Private Sub Button1_Click(ByVal sender As System.Object, _
                              ByVal e As System.EventArgs) Handles Button1.Click
        watch.Reset()
        watch.Start()

        Dim maxVal As Integer = 2000000
        Dim sumOfPrimes As Long = 0
        For i As Integer = 2 To maxVal
            If isPrime(i) Then
                sumOfPrimes += i
            End If
        Next
        watch.Stop()
        Label1.Text = watch.ElapsedMilliseconds.ToString
        Label2.Text = "The sum of all the prime numbers below " & maxVal & " is " & sumOfPrimes
    End Sub
End Class

AnyCPU was faster than x86 every time. I am using .NET 4.0.

Peter Mortensen
dbasnett