I have a query in SQL Server 2019 that does a SELECT on the primary key fields of a table with about 6 million rows. I want to know exactly how fast my query is, down to the microsecond (or at least to the nearest 100 microseconds). The query finishes in under a millisecond, but all the query measurements I can find in SQL Server are only accurate to the millisecond.
What I've tried so far:
SET STATISTICS TIME ON
This only reports times in whole milliseconds.
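For reference, this is roughly what I ran (BLAH, COL_NAME, and the key value are placeholders for my real schema); the Messages output only ever reports whole milliseconds, e.g. "CPU time = 0 ms, elapsed time = 0 ms":

SET STATISTICS TIME ON;
SELECT TOP 1 b.COL_NAME FROM BLAH b WHERE b.[key] = 0x1234;
SET STATISTICS TIME OFF;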
Wrapping my query like so (I've added the variable declarations here for completeness):

DECLARE @Start DATETIME2(7), @End DATETIME2(7);
SELECT @Start = SYSDATETIME();
SELECT TOP 1 b.COL_NAME FROM BLAH b WHERE b.[key] = 0x1234;
SELECT @End = SYSDATETIME();
SELECT DATEDIFF(MICROSECOND, @Start, @End);
This shows that no time has elapsed at all. But that can't be accurate, because if I add

WAITFOR DELAY '00:00:00.001'

which should add a measurable millisecond of delay, DATEDIFF still returns 0. Only if I wait for 2 milliseconds does the delay show up in the DATEDIFF result.

Looking up the execution plan and getting the total_worker_time from the sys.dm_exec_query_stats DMV.
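To pull that number, I ran something along these lines (a sketch; the LIKE filter on my table name is just how I located the plan):

SELECT TOP 10
    qs.total_worker_time,   -- total CPU time, reported in microseconds
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_worker_time_us,
    st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%BLAH%'
ORDER BY qs.last_execution_time DESC;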
Here I see 600 microseconds; however, the Microsoft docs seem to indicate that this number cannot be trusted:
total_worker_time ... Total amount of CPU time, reported in microseconds (but only accurate to milliseconds)
I've run out of ideas and could use some help. Does anyone know how I can accurately measure my query in microseconds? Would Extended Events help here? Is there another performance monitoring tool I could use? Thank you.