
If I want to benchmark how different table definitions affect row insertion speed in SQL Server, I guess it is not sufficient to just time the transaction from BEGIN to COMMIT: that only measures the time spent appending the INSERTs to the (sequential) log. Right?

But the real I/O hit comes when the INSERTs are actually applied to the real table (a clustered index, which might be slightly reorganized after the INSERTs). How can I measure the total time used, all inclusive? That is, the time for all the INSERTs (written to the log) plus the time used for updating the "real" data structures? Is it sufficient to perform a CHECKPOINT before stopping the timer?
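For concreteness, this is roughly the kind of harness I have in mind (the table, columns and row source below are placeholders, not my real schema):

DECLARE @start DATETIME2 = SYSUTCDATETIME();

BEGIN TRANSACTION;
    -- hypothetical workload: bulk-insert 100,000 rows into the table definition under test
    INSERT INTO dbo.TestTable (Id, Payload)
    SELECT n, REPLICATE('x', 100)
    FROM dbo.Numbers          -- assumed helper table of consecutive integers
    WHERE n <= 100000;
COMMIT TRANSACTION;

-- is this enough to also include the cost of writing the data pages themselves?
CHECKPOINT;

SELECT DATEDIFF(MILLISECOND, @start, SYSUTCDATETIME()) AS ElapsedMs;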

someName

2 Answers


Due to lack of response I will answer this myself.

As far as I can see in the documentation, I can capture all the disk activity induced by a query by issuing a CHECKPOINT, which force-writes all dirty pages to disk.

If nothing but the query being measured has been executed, the only dirty pages will be the ones touched by that query. The experiments I performed seem to support this "theory".
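In other words, the measurement I ended up using looks roughly like this (table and column names are made up for illustration):

-- flush whatever is already dirty so it does not get charged to the test
CHECKPOINT;

DECLARE @start DATETIME2 = SYSUTCDATETIME();

BEGIN TRANSACTION;
    -- the INSERT workload being benchmarked (placeholder)
    INSERT INTO dbo.TestTable (Id, Payload) VALUES (1, 'x');
COMMIT TRANSACTION;

-- force the pages dirtied by the test to disk so their write cost is included
CHECKPOINT;

SELECT DATEDIFF(MILLISECOND, @start, SYSUTCDATETIME()) AS TotalElapsedMs;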

Lieven Keersmaekers
someName
  • But this is a fallacy because the number is artificially low - unless you maintain high inserts/updates for prolonged periods, the checkpoints normally won't slow down your database too much. Much less than in your test. – TomTom Aug 08 '11 at 11:29

SET STATISTICS TIME ON will give you elapsed and CPU times in milliseconds for each statement you run after setting it.
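For example (the INSERT is just a placeholder statement):

SET STATISTICS TIME ON;

INSERT INTO dbo.TestTable (Id) VALUES (1);

-- the Messages tab then shows something like:
--   SQL Server Execution Times:
--      CPU time = 0 ms,  elapsed time = 1 ms.

SET STATISTICS TIME OFF;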

Edit: using the query below you can find out exactly how many pages are dirty in the buffer pool at the time of execution, as well as their size in MB and the server's configured min/max memory, with totals.

SELECT
    ISNULL((CASE WHEN [database_id] = 32767 THEN 'Resource Database' ELSE DB_NAME(database_id) END), 'Total Pages') AS [Database Name],
    SUM(CASE WHEN [is_modified] = 1 THEN 1 ELSE 0 END) AS [Dirty Page Count],
    SUM(CASE WHEN [is_modified] = 1 THEN 0 ELSE 1 END) AS [Clean Page Count],
    COUNT(*) * 8.0 / 1024.0 AS [Size in MB],
    a.value_in_use AS [Min Server Memory],
    b.value_in_use AS [Max Server Memory]
FROM sys.dm_os_buffer_descriptors
INNER JOIN sys.configurations a ON a.configuration_id = 1543   -- min server memory (MB)
INNER JOIN sys.configurations b ON b.configuration_id = 1544   -- max server memory (MB)
GROUP BY [database_id], a.value_in_use, b.value_in_use WITH CUBE
HAVING a.value_in_use IS NOT NULL AND b.value_in_use IS NOT NULL
ORDER BY 1;
Thiago Dantas
  • That is not sufficient. I need the total execution time, all inclusive, that is, including the later writing of dirty buffer pages to disk. – someName Aug 08 '11 at 09:51
  • The writing of dirty pages to disk can happen at any moment, not specifically **after** you run the query. If you really want to profile the checkpoint, you can issue a `CHECKPOINT` right after your insert and STATISTICS TIME will give you its time as well. If you don't want to add up the times yourself, you can catch this with Profiler by tracing the batch-completed event, but you still need to issue the CHECKPOINT yourself. – Thiago Dantas Aug 08 '11 at 14:37