
The current system in question is running SBS 2003 and is going to be migrated to new hardware running SBS 2008. Currently I'm seeing on average 200-300 disk transfers per second total across all the arrays in the system. The array seeing the bulk of the activity is a 6-disk 7200 RPM RAID 6, and it struggles to keep up during high-traffic times (idle time often only 10-20%; response times peaking at 20-50+ ms). Based on some rough calculations this makes sense (roughly 245 functional random IOPS for this array at a 70/30 read-to-write ratio).

I'm considering a much simpler disk configuration: a single RAID 10 array of 10K RPM disks. Using the same parameters as in my calculation above, I get roughly 583 random IOPS.
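For reference, here is the rough math I'm using, as a minimal sketch. The per-disk IOPS figures and the disk count for the proposed RAID 10 are my own assumptions, not measured values or the final spec:

```python
# Rough host-visible (functional) random IOPS for a RAID array, applying the
# usual write penalties: RAID 6 = 6, RAID 10 = 2.
def functional_iops(disks, iops_per_disk, write_penalty,
                    read_ratio=0.7, write_ratio=0.3):
    raw = disks * iops_per_disk
    return raw / (read_ratio + write_ratio * write_penalty)

# Current array: 6 x 7200 RPM in RAID 6, assuming ~100 random IOPS per disk.
print(functional_iops(6, 100, 6))   # ~240, in line with my ~245 estimate

# Proposed array: RAID 10 of 10K RPM disks, assuming ~125 random IOPS per disk
# and a 6-disk array (the disk count here is a placeholder).
print(functional_iops(6, 125, 2))   # ~577, close to the 583 figure above
```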

Granted, SBS 2008 is not the same beast as 2003, but I'd like to assume it will be similar in terms of disk performance, if not better (Exchange 2007 is easier on the disk and there's no ISA Server).

Am I correct in believing that the proposed system will be sufficient in terms of performance, or am I missing something? I've read a lot about recommended disk configurations for products like Exchange, and they often mention things like dedicating spindles to logs. I understand the reasoning behind this, but if I've got more than enough random I/O headroom, does it really matter? I've always at the very least had separate spindles for the OS, but I could really reduce cost and complexity if I just had a single, well-performing array.

So as not to make you guys do my job for me, the generic version of this question is: if I have a projected IOPS figure for a new system, is it sufficient to use this value alone to spec the storage, ignoring "best practice" configurations? (given similar technology, not going from DAS to SAN or anything)


1 Answer


I would not ignore best practices in terms of separate spindles. Here are my reasons.

For the OS, having it on a separate spindle has benefits for backups, restores and upgrades that have nothing to do with I/O performance.

For the logs, there are two reasons to put Exchange stores and logs on different spindles. The first is backups: if they are on separate spindles, it takes a more catastrophic event to take out both of them, which is a good thing. The second is somewhat more subtle. Exchange essentially alternates between writing to the log files and writing to the datastore (this is an oversimplification, but good enough for now). If those two things are on the same spindle, you force the disk to alternate between them, which means it spends far more time in the time-consuming seek state and far less time doing consecutive reads and writes. Essentially the disk is flailing around trying to find the right sectors instead of actually doing anything, and that can greatly reduce its real-world performance. This is also why you get better performance out of a disk after a defrag.
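To put rough numbers on that flailing, here's a back-of-the-envelope sketch; the seek and transfer times are illustrative assumptions for a 7200 RPM disk, not benchmarks:

```python
# How many small log writes per second a spindle can sustain, depending on
# whether the head has to seek back from database I/O before every write.
# All timings below are assumed ballpark figures.
avg_seek_ms = 8.5        # assumed average seek + rotational latency
transfer_ms = 0.5        # assumed transfer time for a small sequential write

# Dedicated log spindle: the head stays on the log, writes are near back-to-back.
dedicated = 1000 / transfer_ms                  # ~2000 writes/s

# Shared spindle: every log write pays a seek to return from the datastore.
shared = 1000 / (avg_seek_ms + transfer_ms)     # ~110 writes/s

print(f"dedicated log spindle: ~{dedicated:.0f} writes/s")
print(f"shared with datastore: ~{shared:.0f} writes/s")
```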

Catherine MacInnes
  • +1 - Having suffered catastrophic losses of transaction log or database spindles in various failure events and seen the ESE recover splendidly once the failed spindle was replaced and the appropriate backup was restored, dedicating spindles per Microsoft's recommendations is a good idea. Catherine's statement re: flailing can best be summed up as: When you mix sequential IO (transaction log writes) and random IO (database read / writes) onto the same spindle it all becomes random IO. You have to assume that it's all random IOPS at that point, rather than thinking about sequential IOPS. – Evan Anderson Jan 09 '10 at 05:02
  • Interesting. Isn't there a benefit, though, to thinking in terms of random performance? Sequential performance is unpredictable when you've got a lot of processes hitting the disk plus fragmentation. – Boden Jan 12 '10 at 16:34