
I have a SQL Server Agent job that rebuilds two indexes and is scheduled to run daily. I am doing this to improve the performance of a stored procedure.

When the Agent job is run from the schedule, it does not affect the performance of the stored procedure; however, if I run the job manually, it does.

Looking at the log for the job, both the manual run and the scheduled run complete without errors and take a similar amount of time. The log even indicates that both are run as the same user.

What am I missing from the scheduled implementation, or is there another way to schedule this?

T-SQL for job:

USE [TestingDB]
GO

ALTER INDEX [_dta_index_TestingOrders_randomIndex] 
ON [dbo].[Orders] 
REBUILD PARTITION = ALL 
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
GO
A daily index rebuild is a sign of things going wrong, for sure. If a daily index rebuild is solving the performance problem, then I would suggest you are fixing the symptom daily and not fixing the problem that is causing this to begin with. – Sean Lange Aug 12 '19 at 15:58

1 Answer


This could simply be an artefact of when the rebuilds run versus when you check performance. I am assuming that your daily rebuild runs at, say, 1 AM, but the performance test is not run until you are in at 8 AM. In that case, other scheduled tasks (especially other maintenance tasks) and general database use can run after the rebuild but before the tests.

On the other hand, you probably test performance immediately after a manual rebuild. This results in different environments for your testing.

When SQL Server runs a rebuild, it will attempt to perform it in memory for speed. This results in a large portion of the table (possibly all of it) being cached in memory, and accesses will be very quick at that point. However, as other processes do work, parts of that table are evicted from memory so other tables can cache their data. As more and more of the first table's data is evicted, overall performance degrades. Thus, testing several hours later can give significantly different results.

Also, overall load on the server can have a big effect on the testing. Even if the performance test is scheduled immediately after the automatic rebuild, if there are other processes running at that time (other index rebuilds, overnight data summarization, large reports, etc.) then performance may be significantly different from what you see with your manual rebuilds.

I have experienced the pattern you are seeing before, and it came down primarily to exactly what I describe here: memory available for caching data. Performing some cache analysis to see what data is cached for each database can rule this situation out, as long as the analysis is run and recorded at intervals, so that you can see how the caching changes over time.
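As a sketch of what that cache analysis could look like, the queries below use the standard SQL Server DMV sys.dm_os_buffer_descriptors, where each row represents one 8 KB page in the buffer pool. The first query shows cached megabytes per database; the second breaks the current database down per object and index (the Orders table and its indexes would appear here). Capture the output at intervals (before/after the scheduled run, after a manual run) to see how the cache shifts:

```sql
-- Buffer pool usage per database (run from any database).
-- Each row in sys.dm_os_buffer_descriptors is one 8 KB page.
SELECT
    CASE bd.database_id
        WHEN 32767 THEN 'ResourceDb'
        ELSE DB_NAME(bd.database_id)
    END                  AS database_name,
    COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
GROUP BY bd.database_id
ORDER BY cached_mb DESC;

-- Per-object / per-index breakdown for the current database
-- (run this in TestingDB to see how much of Orders is cached).
SELECT
    o.name               AS object_name,
    i.name               AS index_name,
    COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)       -- in-row / row-overflow data
    OR (au.type = 2      AND au.container_id = p.partition_id)   -- LOB data
JOIN sys.objects AS o
    ON p.object_id = o.object_id
LEFT JOIN sys.indexes AS i
    ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE bd.database_id = DB_ID()
  AND o.is_ms_shipped = 0
GROUP BY o.name, i.name
ORDER BY cached_mb DESC;
```

Note that querying this DMV requires VIEW SERVER STATE permission, and on a large buffer pool the scan itself is not free, so schedule the snapshots rather than running them constantly.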

Laughing Vergil
Thanks for the tips. I will do some cache analysis. I did a performance test right before the job was scheduled, right after the job ran as scheduled, and after the job was run manually. Performance was good before the job ran, bad after the job ran as scheduled, and back to good after the job was run manually. – khaas Aug 13 '19 at 13:23
  • Hm. Interesting results.... unfortunately. Are there other jobs running simultaneously? – Laughing Vergil Aug 13 '19 at 16:03