
I want to know the various methods by which I can monitor which of my stored procedures and SQL queries are consuming more of a given resource (CPU cycles, scan time, etc.) than a preset threshold value.

I want this logged as well: whenever a user of my site calls some procedure, I want a log entry for every procedure that crosses my threshold.

Is it possible to do this with SQL queries or procedures? Are there built-in procedures for this? Any SQL tools or external tools are welcome, paid (with a trial) or free; I want to try them on my database.

Karthik Venkatraman
Shantanu Gupta

8 Answers


You should be able to do this using Dynamic Management Views (DMVs); in particular, you are probably going to be most interested in the sys.dm_exec_query_stats view, which maintains execution statistics for all queries (CPU time, physical/logical reads, etc.), grouped by execution plan.

Also see this excellent article, which includes a sample query for viewing plan statistics and goes into a lot more detail on the subject.

Finally, if you want to trace and record excessively long-running queries, you might want to consider leaving a SQL Server Profiler trace running at all times, with a filter on execution time set to some high figure (e.g. > 1000 ms). You can either use the SQL Server Profiler Windows application, or create the trace using T-SQL and load its output into a table in the database afterwards.
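By way of illustration, here is a minimal sketch of such a server-side trace (the file path, the RPC:Completed event choice, and the one-second threshold are all assumptions to adapt; on SQL 2005 and later the Duration column is measured in microseconds):

DECLARE @TraceID INT, @maxfilesize BIGINT, @duration BIGINT, @on BIT;
SET @maxfilesize = 100;   -- max MB per trace file (illustrative)
SET @duration = 1000000;  -- 1-second threshold, in microseconds
SET @on = 1;

-- Create the trace; server-side traces write to a file (.trc is appended automatically)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\slow_queries', @maxfilesize;

-- Event 10 = RPC:Completed; columns 1 = TextData, 13 = Duration, 14 = StartTime
EXEC sp_trace_setevent @TraceID, 10, 1, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 14, @on;

-- Keep only events whose Duration >= threshold (comparison operator 4 = 'greater than or equal')
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

-- Start the trace (status 1 = start)
EXEC sp_trace_setstatus @TraceID, 1;

The resulting file can then be loaded into a table for querying, e.g. SELECT * FROM fn_trace_gettable(N'C:\Traces\slow_queries.trc', DEFAULT).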

This has the benefit of telling you exactly which query took how long, when it ran, and what the parameters to that query were (holy SQL, Batman!).

The performance implications of running this trace on loaded databases are in fact very small - I know of surprisingly critical applications which have these traces running as a matter of routine in order to be able to quickly diagnose performance issues (and it does help a lot). The key is choosing a "large" execution time: large enough not to swamp the log, yet small enough to pick up enough long-running queries to be useful.

Another trick that has been used in the past when diagnosing performance issues is to leave an unfiltered SQL Server trace running for a short period of time (a minute or so) on a loaded SQL Server. It really does have surprisingly little effect; you just get swamped with logs.

I also heartily recommend the Microsoft SQL Server Internals books on this subject - they are very technical, but brilliant, because they cover not only these sorts of diagnostic tools but also what the results actually mean.

Justin
  • There is a whole wealth of information on Google on how to use Dynamic Management Views to extract all sorts of performance information from SQL server. – Justin Aug 16 '10 at 14:30
  • One MAJOR catch on the DMV - it only works as long as the item stays in the cache. On a busy system, it might not stay in the cache. We periodically capture the data every few hours, and then use a MERGE to make sure that _any_ calls are saved in our roll-up table (see the sketch below). But don't trust that it'll stay in the cache and the DMV. So, as Justin said, you're limited to a barebones trace or barebones XE. – mbourgon Apr 24 '13 at 13:55
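Following on from that comment, here is a minimal sketch of such a periodic capture (the roll-up table is hypothetical, MERGE requires SQL 2008+, and for simplicity this overwrites the cumulative counters rather than accumulating deltas across plan-cache evictions):

-- Hypothetical roll-up table; one row per cached statement
CREATE TABLE dbo.QueryStatsRollup (
    sql_handle             VARBINARY(64) NOT NULL,
    plan_handle            VARBINARY(64) NOT NULL,
    statement_start_offset INT           NOT NULL,
    execution_count        BIGINT        NOT NULL,
    total_worker_time      BIGINT        NOT NULL,
    total_elapsed_time     BIGINT        NOT NULL,
    captured_at            DATETIME      NOT NULL,
    PRIMARY KEY (sql_handle, plan_handle, statement_start_offset)
);
GO
-- Run every few hours (e.g. from an Agent job) so stats survive cache eviction
MERGE dbo.QueryStatsRollup AS t
USING (SELECT sql_handle, plan_handle, statement_start_offset,
              execution_count, total_worker_time, total_elapsed_time
       FROM sys.dm_exec_query_stats) AS s
ON  t.sql_handle = s.sql_handle
AND t.plan_handle = s.plan_handle
AND t.statement_start_offset = s.statement_start_offset
WHEN MATCHED THEN UPDATE
    SET execution_count    = s.execution_count,
        total_worker_time  = s.total_worker_time,
        total_elapsed_time = s.total_elapsed_time,
        captured_at        = GETDATE()
WHEN NOT MATCHED THEN INSERT
    (sql_handle, plan_handle, statement_start_offset,
     execution_count, total_worker_time, total_elapsed_time, captured_at)
    VALUES (s.sql_handle, s.plan_handle, s.statement_start_offset,
            s.execution_count, s.total_worker_time, s.total_elapsed_time, GETDATE());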

If you have SQL 2005+, you can run the standard reports in Management Studio: right-click the database and, under the standard reports, select 'Object Execution Statistics'. Note that this only covers activity since the last restart, as the data comes from the plan cache. You can also query the same information using the DMV sys.dm_exec_query_stats.

If you are on an older version of SQL Server, use SQL Server Profiler; it will show you execution time, reads, CPU, and writes, and you can filter on thresholds for any of these.

u07ch

These are what I normally use. I can't remember where I got them from, but they work a treat. This one's for longest-running queries:

SELECT TOP 100
    qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds, -- elapsed time is stored in microseconds
    qs.total_elapsed_time / 1000000.0 AS total_seconds,
    qs.execution_count,
    -- the offsets are byte positions into the Unicode batch text (hence /2);
    -- an end offset of -1 means "to the end of the batch"
    SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
         (CASE WHEN qs.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1) AS individual_query,
    o.name AS object_name,
    DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE qt.dbid = DB_ID()
ORDER BY average_seconds DESC;

...and this one gives a list of queries using the most I/O:

SELECT TOP 100
    (total_logical_reads + total_logical_writes) / qs.execution_count AS average_IO,
    (total_logical_reads + total_logical_writes) AS total_IO,
    qs.execution_count AS execution_count,
    -- same offset arithmetic as above to extract the individual statement
    SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
         (CASE WHEN qs.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1) AS individual_query,
    o.name AS object_name,
    DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE qt.dbid = DB_ID()
ORDER BY average_IO DESC;

Hope these are of some help!

Billious
  • Proposed source: https://web.archive.org/web/20080508155149/http://msdn.microsoft.com/en-us/magazine/cc135978.aspx thanks to: https://gist.github.com/huguogang/2159701 – user423430 Oct 26 '17 at 13:44

From the winning answer to a recent SO question, this will give you the top 50 most used procs and the statements in the procs. You can change the TOP 50 to TOP 1 or any other number you'd like to see.

SELECT TOP 50 *
FROM ( SELECT COALESCE(OBJECT_NAME(s2.objectid), 'Ad-Hoc') AS ProcName
           ,execution_count
           ,s2.objectid
           ,( SELECT TOP 1 SUBSTRING(s2.text,
                                     statement_start_offset / 2 + 1,
                                     ( ( CASE WHEN statement_end_offset = -1
                                              THEN ( LEN(CONVERT(NVARCHAR(MAX), s2.text))
                                                     * 2 )
                                              ELSE statement_end_offset
                                         END ) - statement_start_offset )
                                     / 2 + 1)
            ) AS sql_statement
           ,last_execution_time
        FROM sys.dm_exec_query_stats AS s1
        CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2
     ) x
WHERE sql_statement NOT LIKE 'SELECT * FROM(SELECT coalesce(object_name(s2.objectid)%'
    AND OBJECTPROPERTYEX(x.objectid, 'IsProcedure') = 1
    AND EXISTS ( SELECT 1
                    FROM sys.procedures s
                    WHERE s.is_ms_shipped = 0
                        AND s.name = x.ProcName )
ORDER BY execution_count DESC

You could run this periodically to see the changes over time: put it in a stored procedure, insert the results into a log table, and set up a scheduled job to run that sproc at fixed intervals, as sketched below.
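A minimal sketch of that logging setup (the log table and the wrapper procedure dbo.usp_TopProcsByExecution are hypothetical; the wrapper is assumed to return the same five columns as the query above, in the same order):

-- Hypothetical log table; captured_at defaults to the time of capture
CREATE TABLE dbo.ProcUsageLog (
    captured_at         DATETIME      NOT NULL DEFAULT GETDATE(),
    ProcName            SYSNAME       NOT NULL,
    execution_count     BIGINT        NOT NULL,
    objectid            INT           NULL,
    sql_statement       NVARCHAR(MAX) NULL,
    last_execution_time DATETIME      NULL
);
GO
-- INSERT ... EXEC captures the wrapper procedure's result set;
-- schedule this statement in a SQL Server Agent job to log at fixed intervals
INSERT INTO dbo.ProcUsageLog
    (ProcName, execution_count, objectid, sql_statement, last_execution_time)
EXEC dbo.usp_TopProcsByExecution;  -- hypothetical wrapper around the query above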

DOK

You could use SQL Server Profiler:

  1. Create a new trace.
  2. Select the server you want.
  3. Select the 'Blank' template.
  4. In the Events Selection tab, select 'SP:Completed' (under Stored Procedures).
  5. Turn on the columns you want.
  6. In Column Filters, select Duration and enter a value for 'Greater than or equal'. This is your threshold. (Note the units: SQL Server 2005 and later store Duration in microseconds, though the Profiler GUI displays milliseconds by default.)
  7. Click 'Run'.

Notes:

  • This excludes the time to load the SP; if your SPs are very large, that will add to the times above.
  • I've occasionally had problems with the Duration value; if you have issues with it, you may want to DATEDIFF the end/start times instead.
Geoff

On the paid-tool side, I'd highly recommend Ignite for SQL Server from Confio Software. You can download a free trial and take it for a spin. We've been using it for about two years now and are very happy with the results. In our case, we picked off several "low-hanging fruit" stored procedures that we optimized for quick wins just in the time we ran the demo. That was enough to convince us it was a worthwhile investment.

Joe Stefanelli

Also note that in SQL 2008 there is a new DMV specifically for procedures: sys.dm_exec_procedure_stats. This view gives you cumulative stats for the entire procedure, including time and CPU spent doing non-query work such as WAITFOR, function calls, and conditional logic.
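For example, a quick sketch of querying it (the TOP count and the millisecond conversion are illustrative; the DMV reports times in microseconds):

SELECT TOP 20
    DB_NAME(ps.database_id)                   AS database_name,
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    ps.total_worker_time  / ps.execution_count / 1000.0 AS avg_cpu_ms,
    ps.total_elapsed_time / ps.execution_count / 1000.0 AS avg_elapsed_ms,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY avg_elapsed_ms DESC;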

Please do be cautious about running Profiler traces on your production servers. Tracing introduces overhead on the server, even minimal tracing, and depending on how busy your server is, this can actually be noticeable. Adding a filter to the trace definition does not reduce this overhead; SQL Server still needs to process each event fired to determine whether or not it meets the filter criteria. Filtering only serves to reduce the size of the trace file.

Another tool you may want to look into, if you're dealing with SQL 2008, is the Data Collector and Management Data Warehouse. This tool ships with SQL Server (at no extra cost) and lets you collect query statistics on a regular schedule and upload them to a data warehouse. It comes with built-in collectors and reports that are quite comprehensive for the most common monitoring needs.

http://msdn.microsoft.com/en-us/library/dd939169(SQL.100).aspx

Of all the things mentioned above, I think your best bet is sys.dm_exec_query_stats, as it is low-overhead, free, and available on both SQL 2005 and SQL 2008.

Pam Lahoud

If you are using SQL Server 2008, you can experiment with the Resource Governor, which allows you to throttle and monitor CPU and memory workloads.
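As a rough sketch of what that looks like (the pool, group, and login names here are hypothetical; Resource Governor requires SQL 2008 Enterprise, and the classifier function lives in master):

-- Cap CPU and memory for sessions routed into a dedicated pool
CREATE RESOURCE POOL ReportingPool           -- hypothetical name
WITH (MAX_CPU_PERCENT = 50, MAX_MEMORY_PERCENT = 50);
GO
CREATE WORKLOAD GROUP ReportingGroup
USING ReportingPool;
GO
-- Classifier: routes each new session to a workload group, here by login name
CREATE FUNCTION dbo.fn_classifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'reporting_user'     -- hypothetical login
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO
-- Per-pool usage can then be monitored via sys.dm_resource_governor_resource_pools
-- and sys.dm_resource_governor_workload_groups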

For logging purposes, you can use the Data Collector: you can create a custom collection set that gathers snapshots from any DMVs and uploads them to a performance data warehouse.

Idera has charting tools that display resource consumption per query. I think it allows users to set thresholds and email alerts too.

Ching Chang