
I currently have a server which runs SSIS 2008 and SQL Server 2008. I want to improve the performance of the server and keep it steady while SSIS is running.

I have an SSIS package that runs once per hour on the server, and it slows the server down for around 3 minutes. During these 3 minutes some sites can stop responding altogether due to the load on the tables and the locking. The process needs to run once per hour to keep stock levels and pricing up to date, so it is essential.

My question is: will throwing more RAM at the server sort the issue, or will I have to resort to putting another CPU in, and with it another SQL Server licence?

It's currently the following spec:
  • 1 x Quad Core Xeon 2.8 GHz
  • 8 GB RAM
  • Windows Server 2008 Datacenter 32-bit
  • 2 x 7,500 rpm 500 GB drives

Proposed spec:
  • 1 x Quad Core Xeon 2.9 GHz
  • 24 GB RAM
  • Windows Server 2008 Standard 64-bit
  • 2 x 15,000 rpm 300 GB SAS drives

Do you think this will give a significant increase in reliability?

EDIT

After some investigation into possible bottlenecks, it appears that when my SSIS import starts it is reading ~200 million b/min, and when it's close to the end it is reading over 1 billion b/min. The processor is at around 20% usage and the RAM at 53%. I have modified the SSIS package slightly to do more items in parallel, and it does help a little, but the issue still remains. Does anyone have any ideas?

To note, I have my log file and data file on separate disks. The log is on my E partition and the data is on F, which is a separate SAS SAN drive. I'm not sure how the SAS SAN allocates resources, but it is a shared SAN in the datacenter. Would it be better to fit two SAS drives to the server rather than using a shared resource?
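A query along these lines against the sys.dm_io_virtual_file_stats DMV (built into SQL 2008) should show which database files are accumulating stall time, including the files sitting on the shared SAN, without needing access to the host's monitoring:

```sql
-- Cumulative I/O and stall time per database file since the last restart.
-- High io_stall_* relative to num_of_reads/num_of_writes points at the disks.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_bytes_read,
       vfs.io_stall_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms
  FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
  JOIN sys.master_files AS mf
    ON mf.database_id = vfs.database_id
   AND mf.file_id     = vfs.file_id
 ORDER BY vfs.io_stall_read_ms DESC;
```

Sampling it just before and just after the hourly SSIS run, and taking the difference, isolates the stalls caused by the import itself.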

Chris Lomax

2 Answers


It's probably a good idea to do some performance monitoring, identify the actual bottleneck and address it. RAM and CPU are both equally "important" to a server, and it's better to address this sort of thing scientifically rather than just throw a bunch of resources at it to see what sticks.

When you say "will throwing more RAM at the server sort the issue or will I have to resort to putting another CPU in", the answer isn't one or the other of those two things. The answer is always "it depends on what is making it slow down; measure the performance of the system and find out". It could equally well be storage that is the bottleneck.
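As a rough first pass (a sketch, not a full methodology), the sys.dm_os_wait_stats DMV will tell you what SQL Server itself has been waiting on since the last restart:

```sql
-- Top waits since the last service restart. PAGEIOLATCH_* near the top
-- suggests data-file reads, WRITELOG suggests log writes, LCK_* suggests
-- blocking; CXPACKET alone usually just means parallelism is in play.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
  FROM sys.dm_os_wait_stats
 WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                         'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
 ORDER BY wait_time_ms DESC;
```

Comparing the figures during a quiet hour against the figures during the SSIS run is what actually points the finger.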

Rob Moir
  • Well I assumed that storage was an issue, so I rented 50 GB of SAN space on SAS drives, but I didn't see anything worth writing home about. It definitely reduced the issue of the sites not responding once per hour, but it just means the sites slow down a lot. I did check the performance monitoring, and the bottleneck at the time was the hard drives. I really thought that would ease the server. – Chris Lomax May 23 '11 at 14:10
  • Or even the DB design. Might not have the best indices. – mfinni May 23 '11 at 14:11
  • As mfinni says, Chris, it could even be a software/design issue. It may be (and yes I _have_ seen this one) that rather than having one particular "fault" that needs to be corrected, you have a number of small issues that aren't much by themselves but overall add up to poor performance (maybe you need a *bit* more RAM, maybe your storage isn't organised *quite* right, maybe the processors could be a *bit* faster/need a *bit* more cache, etc) and you will need to work on all these areas combined to see great performance rather than just pushing everything into one area. – Rob Moir May 23 '11 at 14:15
  • Well the issue only rears its head when the SSIS packages are running. The software itself is tuned to death: I have run SQL Profiler on all queries in my db, I make sure all query plans are using the correct indexes, etc. The issue only really comes around when the packages are running. The package updates around a million rows in the db when it runs. – Chris Lomax May 23 '11 at 14:18
  • I hear that, clearly the SSIS job is pushing it over the edge. It may be that this needs a re-design (could you run a few 'lower impact' jobs at a higher frequency to get the same result?), or that you need to capacity plan with this in mind as well as your general SQL Server load (I'm sure you already are trying to do this!). – Rob Moir May 23 '11 at 14:22
  • Well I reduced the batches that the SSIS runs at any one time, and it does help, but it takes a lot longer to run. Initially the import took 28 seconds; moving to 10k-row batches, it now takes around 3-4 mins, but the server does handle it better. I don't feel like reducing the batches even more is a solution, though, but rather a hack. I was hoping that more RAM or two CPUs would help, as that means the system could handle either more in memory or more processes. I don't want to invest in it, though, and then have it not work. – Chris Lomax May 23 '11 at 14:31
  • Chris, more CPU power or more memory may well help. The question is which one - have you compared performance monitoring figures from the "normal" times against these peaks to see if that helps pinpoint the issue? – Rob Moir May 23 '11 at 14:41
  • Well we ran a series of tests on the server for bottlenecks, and initially it was the hard disks. It went from 1 million reads to 40-50 million reads while the SSIS was running. This is when we rented the SAN space on SAS disks, hoping that the increased rpm would help with the issue. I am now wondering if it's worth bringing that back onto my own SAS disks, and whether the network throughput is causing latency. I cannot effectively monitor these external SAS drives as they are controlled by the host. RAM and processor remained within 80% of their potential usage, though. – Chris Lomax May 23 '11 at 14:46
  • Rather than guessing, I will run some more tests, and then if you wouldn't mind helping me look over them I would be grateful. Do you recommend any tools, or just the built-in SQL monitor? – Chris Lomax May 23 '11 at 14:53
  • SQL monitor and the general performance monitor, or whatever it's called these days on Windows 2008, are all I use... – Rob Moir May 23 '11 at 15:00
  • After doing some investigation, it appears that the bottleneck is still the hard disk. When the update starts it is reading ~200 million b/min, and when it's close to the end it is reading over 1 billion b/min. The processor is at around 20% usage and the RAM at 53%. I have modified the SSIS slightly to do more items in parallel, and it does help a little, but the issue still remains. Anyone any ideas? – Chris Lomax May 24 '11 at 12:24
  • Sorry, I must add that reads from this db in normal operation are around 1 million a min. – Chris Lomax May 24 '11 at 12:25
  • More spindles? Are the SQL log and data files on different physical volumes? (It might be worth adding your numbers to your question in an edit, btw; anything to push it back up to the top of the page and get more eyes and ideas paying attention to it!) – Rob Moir May 24 '11 at 13:44
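The batch-size reduction discussed in these comments can be written as a simple loop; the table and column names below (dbo.StockLevels, staging.StockImport, ProductId, Quantity, UnitPrice) are hypothetical stand-ins for the real import target:

```sql
-- Hypothetical schema: dbo.StockLevels is the live table, staging.StockImport
-- holds the hourly feed. Updating in TOP (N) chunks keeps each transaction
-- short, so locks are released between batches and readers get a look in.
DECLARE @batch INT = 10000;
DECLARE @rows  INT = 1;

WHILE @rows > 0
BEGIN
    UPDATE TOP (@batch) sl
       SET sl.Quantity  = s.Quantity,
           sl.UnitPrice = s.UnitPrice
      FROM dbo.StockLevels AS sl
      JOIN staging.StockImport AS s
        ON s.ProductId = sl.ProductId
     WHERE sl.Quantity  <> s.Quantity
        OR sl.UnitPrice <> s.UnitPrice;   -- skip rows that haven't changed

    SET @rows = @@ROWCOUNT;
END
```

The WHERE clause is doing a lot of the work here: only touching rows whose values actually changed can cut the write volume dramatically if most of the hourly feed is unchanged.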

In most cases, RAM. With today's servers you are not often CPU-bound, which is why virtualization works so well, but it demands a ton of RAM.

KCotreau
  • While this is true in general, Robert makes the very good point that for this scenario, Chris shouldn't guess. He can run the SQL performance tools (Profiler, tuning advisor) to see what his actual bottleneck is. It could even be something like missing indices, and no new hardware would be needed. – mfinni May 23 '11 at 14:11