
I have a query in SQL Server 2019 that does a SELECT on the primary key fields of a table. The table has about 6 million rows of data in it. I want to know exactly how fast my query is, down to the microsecond (or at least to within 100 microseconds). The query runs in under a millisecond, but everything I can find in SQL Server only measures query time to millisecond accuracy.

What I've tried so far:

  • SET STATISTICS TIME ON

    This only shows milliseconds

  • Wrapping my query like so:

    DECLARE @Start DATETIME2, @End DATETIME2;
    SELECT @Start = SYSDATETIME();
    SELECT TOP 1 b.COL_NAME FROM BLAH b WHERE b.[key] = 0x1234;
    SELECT @End = SYSDATETIME();
    SELECT DATEDIFF(MICROSECOND, @Start, @End);
    

    This shows that no time has elapsed at all. But that isn't accurate, because if I add WAITFOR DELAY '00:00:00.001', which should add a measurable millisecond of delay, the DATEDIFF still shows 0. Only if I wait for 2 milliseconds does it show up in the DATEDIFF.

  • Looking up the execution plan and getting total_worker_time from the sys.dm_exec_query_stats DMV (a sketch of that lookup follows this list).
    Here I see 600 microseconds; however, the Microsoft docs indicate that this number cannot be trusted:

total_worker_time ... Total amount of CPU time, reported in microseconds (but only accurate to milliseconds)
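
For reference, a sketch of that DMV lookup (column names come from sys.dm_exec_query_stats and sys.dm_exec_sql_text; the LIKE filter on the table name is just a hypothetical way to find the cached plan):

    -- Pull per-plan timing stats for the query; total_worker_time and
    -- total_elapsed_time are reported in microsecond units but are only
    -- accurate to the millisecond, per the docs quoted above.
    SELECT TOP (10)
        st.text                                    AS query_text,
        qs.execution_count,
        qs.total_worker_time / qs.execution_count  AS avg_worker_time_us,
        qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time_us
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.text LIKE '%FROM BLAH%'               -- hypothetical filter
    ORDER BY qs.last_execution_time DESC;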

I've run out of ideas and could use some help. Does anyone know how I can accurately measure my query in microseconds? Would extended events help here? Is there another performance monitoring tool I could use? Thank you.
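
On the Extended Events question: the statement-completed events do report their duration field in microseconds, so a session along these lines might be worth trying. This is only a sketch; the session name, file name, and text filter are placeholders.

    -- Capture completed statements with microsecond-granularity duration.
    CREATE EVENT SESSION [MicroTiming] ON SERVER
    ADD EVENT sqlserver.sql_statement_completed
        (ACTION (sqlserver.sql_text)
         WHERE (sqlserver.like_i_sql_unicode_string(sqlserver.sql_text, N'%FROM BLAH%')))  -- hypothetical filter
    ADD TARGET package0.event_file (SET filename = N'MicroTiming.xel');
    GO
    ALTER EVENT SESSION [MicroTiming] ON SERVER STATE = START;
    -- ...run the query, then inspect the .xel file; the duration column is in microseconds.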

Dale K
andrewpw
  • `WAITFOR DELAY '00:00:00.001'` would equate to `WAITFOR DELAY '00:00:00.000'`, as the time period is converted to a `datetime`. If you use a value that can be held in a `datetime` you *do* get a value (see the sketch after these comments). – Thom A Apr 27 '21 at 13:50
  • Oh I see what you mean. So when I do '00:00:00.001' it's rounding down to '00:00:00.000', but when I do '00:00:00.002' it's rounding up to '00:00:00.003'. So if that's the case, does that mean my wrapping of the query works, but my query takes less than a microsecond? – andrewpw Apr 27 '21 at 17:27
  • Please [edit] your question to tell us what you want to do with this information. Considering how flaky the results are from trying to measure sub-millisecond performance on networked server software (SQL Server) with hundreds of millions of lines of code, maybe there's another approach to solving your problem. – O. Jones Apr 29 '21 at 13:33
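
To illustrate the rounding described in the comments above (a small sketch; `datetime` only resolves to roughly 3-millisecond increments, so values round to .000/.003/.007):

    -- The WAITFOR DELAY argument is interpreted with datetime precision.
    SELECT CAST('00:00:00.001' AS datetime) AS rounds_down,  -- 1900-01-01 00:00:00.000
           CAST('00:00:00.002' AS datetime) AS rounds_up;    -- 1900-01-01 00:00:00.003

    DECLARE @Start DATETIME2 = SYSDATETIME();
    WAITFOR DELAY '00:00:00.002';  -- effectively ~3 ms after rounding
    SELECT DATEDIFF(MICROSECOND, @Start, SYSDATETIME()) AS elapsed_us;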

1 Answer


This is too long for a comment.

In general, you don't look for performance measured in microseconds. There is just too much variation based on what else is happening in the database, on the server, and in the network.

Instead, you set up a loop and run the query thousands -- or even millions -- of times and then average over the executions. There are further nuances, such as clearing caches if you want to be sure that the query is using cold caches.
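
A minimal sketch of that approach, reusing the query from the question (the iteration count is arbitrary, and the type of @dummy is assumed; dividing the total elapsed microseconds by the iteration count gives an average per execution):

    -- Time many executions and average; assigning the column to a variable
    -- keeps result rows from being sent to the client between iterations.
    -- Loop overhead is included, so treat the result as an upper bound.
    DECLARE @i INT = 0,
            @iterations INT = 100000,      -- arbitrary; raise for a steadier average
            @dummy VARCHAR(100);           -- assumed type for COL_NAME
    DECLARE @Start DATETIME2 = SYSDATETIME();

    WHILE @i < @iterations
    BEGIN
        SELECT TOP 1 @dummy = b.COL_NAME FROM BLAH b WHERE b.[key] = 0x1234;
        SET @i += 1;
    END;

    SELECT DATEDIFF_BIG(MICROSECOND, @Start, SYSDATETIME()) / @iterations
           AS avg_microseconds_per_execution;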

Gordon Linoff
  • Thanks. Yeah, I considered this, but I didn't know how caching would impact my measurement. If I rerun the same query over and over in a loop, isn't it going to look a lot faster than it actually is because SQL Server holds the data in memory? – andrewpw Apr 27 '21 at 16:43
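
On the caching concern in that comment: repeated runs will indeed measure warm-cache performance. If cold-cache numbers are wanted, the caches can be cleared between timed runs, but only on a test server, since these commands affect the whole instance (a sketch):

    -- TEST server only: flush dirty pages, empty the buffer pool (data
    -- cache), and clear the plan cache so the next run starts cold.
    CHECKPOINT;
    DBCC DROPCLEANBUFFERS;
    DBCC FREEPROCCACHE;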