
How can a call to SqlCommand.ExecuteReader take less time to complete than the SQL batch itself, as seen in SQL Profiler?

I have the following simple code running in a console app. It calls SqlCommand.ExecuteReader, and I time the call using a Stopwatch:

    var swQueryTime = new Stopwatch();

    var conn = new SqlConnection("Data Source=.;Initial Catalog=master;Trusted_Connection=True;");
    conn.Open();
    string sql = @"select * from sys.dm_os_memory_clerks; select * from sys.dm_os_performance_counters";

    for (int i = 0; i < 10; i++)
    {
        var comm = new SqlCommand(sql, conn);
        swQueryTime.Restart();

        // time only the ExecuteReader call itself
        var dr = comm.ExecuteReader();

        swQueryTime.Stop();

        Console.WriteLine("ElapsedMilliseconds: {0}", swQueryTime.ElapsedMilliseconds);

        dr.Close();
        comm = null;
    }

On average, the SQL batch duration reported by Profiler is about four times longer than what is reported on the .NET side.

I have checked that Profiler is specifically reporting milliseconds, not microseconds.

I am not using the async version of SqlCommand.ExecuteReader.

From what I have read, and have verified using the Profiler start and end times, the Profiler duration is not a sum of time across multiple threads/cores.

Ideas appreciated.

  • +1: for checking the Sql Profiler Milli/Micro-second thing. – RBarryYoung Mar 06 '13 at 13:24
  • Have you checked the duration of the Sql Statement itself in the Profiler? It's always somewhat less than the Batch Duration, usually the difference is slight, but sometimes it is significant. – RBarryYoung Mar 06 '13 at 13:26
  • Yes I have checked the statement duration - the 2 statements sum to the batch duration which is expected. – user2139987 Mar 06 '13 at 13:43

1 Answer


Because you have only got timings to the very start of the batch: ExecuteReader returns as soon as the first results begin to arrive, not when the whole batch has finished. If you consume the data too, the times will probably line up:

using(var dr = comm.ExecuteReader()) {
    do {
        while(dr.Read()) {}
    } while (dr.NextResult());
}
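
Putting the two snippets together, a version of the question's loop that drains the reader inside the timed region might look like this (a sketch reusing `sql`, `conn` and `swQueryTime` from the question's code):

using(var comm = new SqlCommand(sql, conn))
{
    swQueryTime.Restart();

    // Drain every row of every result set so the Stopwatch covers the
    // whole batch, not just the arrival of the first rows.
    using(var dr = comm.ExecuteReader())
    {
        do {
            while(dr.Read()) {}
        } while (dr.NextResult());
    }

    swQueryTime.Stop();
    Console.WriteLine("ElapsedMilliseconds: {0}", swQueryTime.ElapsedMilliseconds);
}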

Incidentally, there are also a few SET options that can change the performance of some queries. I'm not sure that applies here, but they can significantly impact tables that have computed + persisted columns: if the session's SET values aren't compatible with the settings in effect when the column was created, it will need to re-run the calculation per row. This is especially noticeable for computed + persisted + indexed columns when you are filtering on that column - it will have to do a table scan (or similar) rather than an index scan / index seek.
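
As a hedged sketch of that last point (the table, columns, and index names are invented for illustration; `conn` is the open connection from the question):

// Hypothetical schema: Gross is a computed + persisted column with an index:
//   CREATE TABLE dbo.Orders (
//       Id    int IDENTITY PRIMARY KEY,
//       Net   decimal(18,2) NOT NULL,
//       Tax   decimal(18,2) NOT NULL,
//       Gross AS (Net + Tax) PERSISTED);
//   CREATE INDEX IX_Orders_Gross ON dbo.Orders (Gross);

// ARITHABORT ON is among the SET options the index on the computed column
// requires, so this query is free to seek on IX_Orders_Gross:
var fast = new SqlCommand(
    "SET ARITHABORT ON;  SELECT Id FROM dbo.Orders WHERE Gross > 100;", conn);

// With an incompatible SET value the optimizer can't use that index, so it
// recomputes Net + Tax per row and scans instead:
var slow = new SqlCommand(
    "SET ARITHABORT OFF; SELECT Id FROM dbo.Orders WHERE Gross > 100;", conn);

If the answer's point applies, running both and comparing durations in Profiler should make the per-row recalculation cost visible.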

Marc Gravell
  • Yes, this makes Profiler and ExecuteReader agree on timings - thank you! My reading on ExecuteReader had made me believe it waited for all data to be returned before moving on to the next statement - do you know if this alternative behaviour is documented anywhere? – user2139987 Mar 06 '13 at 13:53
  • @user2139987 I haven't checked, but fundamentally it returns TDS, which is a stream - you can start reading data long before you get to the end of the stream. Certainly it is explicitly the case when using `SequentialAccess` mode, since it even reads individual columns as a stream. You can also see the side effects of this streaming behavior if you try to access `out` / `ref` parameters before the end of the data - they are returned *last* (see the sketch below). You can also do things like raise a sql-error after a long select - if you close the reader before the end, you might never see the error. – Marc Gravell Mar 06 '13 at 13:57
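
A small sketch of the output-parameter side effect mentioned in the last comment (the stored procedure name and parameter are hypothetical; requires `using System.Data;` alongside `System.Data.SqlClient`):

using(var comm = new SqlCommand("dbo.SomeProcWithOutput", conn))
{
    comm.CommandType = CommandType.StoredProcedure;
    var rows = comm.Parameters.Add("@rowCount", SqlDbType.Int);
    rows.Direction = ParameterDirection.Output;

    using(var dr = comm.ExecuteReader())
    {
        // rows.Value is still unset here: output/return values travel at
        // the *end* of the TDS stream, after all result sets.
        do {
            while(dr.Read()) {}
        } while (dr.NextResult());
    }

    // Only once the reader has been drained and closed is the value populated.
    Console.WriteLine(rows.Value);
}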