
As explained in this MSDN article, when one uses the .NET Reflection API, such as InvokeMember, the first call takes much longer to run than subsequent calls because the metadata is cached.
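
For reference, the reflection-based call the article describes looks roughly like this (a sketch only, not part of my test; it uses the Calculator class from the library code further down):

using System;
using System.Diagnostics;
using System.Reflection;
using MyClass;

class ReflectionTest
{
    static void Main()
    {
        Calculator t = new Calculator();
        Type type = typeof(Calculator);

        for (int i = 0; i < 3; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            // The first InvokeMember call resolves and caches the member metadata
            object result = type.InvokeMember("Add",
                BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Instance,
                null, t, new object[] { 100, 200 });
            sw.Stop();
            Console.WriteLine("{0} - {1}", result, sw.Elapsed.TotalMilliseconds);
        }
    }
}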

When I tested a direct method call without using Reflection, I saw a similar effect on both Mono and .NET 4.

The first number is the result of the operation, and the second number after the '-' is the time spent on the operation in ms. I used the '<--' mark to identify the first call to each method.

300 - 0.192 <--
300 - 0.004
300 - 0.004
-100 - 0.096 <--
-100 - 0.004
-100 - 0.004

Why is this? I can understand that the first call might be slower, but I did not expect it to be 50x slower.

Below is the source code that produces this result.

library

namespace MyClass
{
    public class Calculator
    {
        public int Value1 {get; set;}
        public int Value2 {get; set;}
        public Calculator()
        {
            Value1 = 100;
            Value2 = 200;
        }

        public int Add(int val1, int val2)
        {
            Value1 = val1; Value2 = val2;
            return Value1 + Value2;
        }

        public int Sub(int val1, int val2)
        {
            Value1 = val1; Value2 = val2;
            return Value1 - Value2;
        }
    }
}

Code to call this library

// http://msdn.microsoft.com/en-us/magazine/cc163759.aspx
using System;
using System.IO;
using System.Reflection;
using System.Diagnostics;
using System.Collections.Generic;
using MyClass;

class TestOne
{
    static void DirectTest()
    {
        Stopwatch sw;
        Calculator t = new Calculator();

        sw = Stopwatch.StartNew();
        int value1 = t.Add(100,200);
        sw.Stop();
        double time1 = sw.Elapsed.TotalMilliseconds;

        sw = Stopwatch.StartNew();
        int value2 = t.Add(100,200);   
        sw.Stop();
        double time2 = sw.Elapsed.TotalMilliseconds;

        sw = Stopwatch.StartNew();
        int value3 = t.Add(100,200); 
        sw.Stop();
        double time3 = sw.Elapsed.TotalMilliseconds;

        Console.WriteLine("{0} - {1}", value1, time1);
        Console.WriteLine("{0} - {1}", value2, time2);
        Console.WriteLine("{0} - {1}", value3, time3);

        sw = Stopwatch.StartNew();
        value1 = t.Sub(100,200);
        sw.Stop();
        time1 = sw.Elapsed.TotalMilliseconds;

        sw = Stopwatch.StartNew();
        value2 = t.Sub(100,200);  
        sw.Stop();
        time2 = sw.Elapsed.TotalMilliseconds;

        sw = Stopwatch.StartNew();
        value3 =  t.Sub(100,200); 
        sw.Stop();
        time3 = sw.Elapsed.TotalMilliseconds;

        Console.WriteLine("{0} - {1}", value1, time1);
        Console.WriteLine("{0} - {1}", value2, time2);
        Console.WriteLine("{0} - {1}", value3, time3);
    }
    static void Main()
    {
        DirectTest();
        DirectTest();
    }
}
prosseek
  • Don't think of the first call being 50 times slower, think of the subsequent calls being 50 times *faster*. – Travis Gockel Jun 20 '11 at 21:16
  • @Travis Gockel: True, but in some cases it's that first pass that you really care about. There are of course ways to solve this problem. – Ed S. Jun 20 '11 at 21:20
  • Multiple duplicates, including [Why does the second for loop always execute faster than the first one?](http://stackoverflow.com/questions/1021737/why-does-the-second-for-loop-always-execute-faster-than-the-first-one) – user7116 Jun 20 '11 at 21:20
  • Could be anything, most likely external CPU pressure. It's just a (very) bad benchmark. – H H Jun 20 '11 at 21:20
  • @Henk Holterman: No, it can't be anything, it is definitely the JIT compilation pass. – Ed S. Jun 20 '11 at 21:21
  • @InBetween: Agreed. Some people... I guess you are supposed to know everything to begin with, but of course, what would such an attitude mean for this forum? – Ed S. Jun 20 '11 at 21:21
  • What happens if you warm up a stopWatch first, before your first test? – agent-j Jun 20 '11 at 21:31

1 Answer


This is because of the Just-In-Time (JIT) compilation model used for .NET apps. The MSIL bytecode is translated to machine code by the JIT compiler the first time it runs, and subsequent executions of that code are much faster because the native version has already been generated and cached.

You pay a one-time penalty when you run your code, but the JIT compiler can also perform optimizations for the current architecture that could not be performed if the code were native from the get-go. You can, however, force a JIT pass by calling RuntimeHelpers.PrepareMethod.
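
For example, a minimal sketch (assuming the Calculator type from the question) that pre-JITs both methods before the timing runs might look like this:

using System.Runtime.CompilerServices;
using MyClass;

class PreJitTest
{
    static void Main()
    {
        // Force native code to be generated for both methods up front,
        // so the first timed call no longer includes JIT compilation.
        RuntimeHelpers.PrepareMethod(typeof(Calculator).GetMethod("Add").MethodHandle);
        RuntimeHelpers.PrepareMethod(typeof(Calculator).GetMethod("Sub").MethodHandle);

        // ...run the timing code from the question here; the '<--' calls
        // should now measure roughly the same as the subsequent calls.
    }
}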

Ed S.
  • Possible but unlikely to be the whole answer. Easy to test by calling the methods immediately before the first Stopwatch. – H H Jun 20 '11 at 21:23
    @Henk Holterman: *"unlikely to be the whole answer."* - Why is that? What else would you suggest? This looks exactly like the JIT pass to me. Do you have any other recommendations? – Ed S. Jun 20 '11 at 21:27
  • @Henk Holterman: Well, I can see his code, so we know there are no threading issues there. Of course Windows is not a RTOS, but those numbers don't lie and are very repeatable. Scheduling issues would not cause such a large difference unless the machine was just completely CPU bound, and in that case we would not see such repeatable numbers. I've spent my fair share of time running down CPU bottlenecks in .NET apps and I see nothing here to indicate that this is anything but the result of the first JIT pass. – Ed S. Jun 20 '11 at 21:32
  • And do you also put Stopwatches around single calls to 2-line methods? – H H Jun 20 '11 at 21:41
  • @Henk Holterman: What difference would that make? If anything the code will execute so fast as to be under the resolution of the stopwatch, but that is not the case here given the `.194ms` result. – Ed S. Jun 20 '11 at 21:44