I am trying to figure out the 8K-page I/O capacity (IOPS) of a disk subsystem. The drives are 7200 RPM SATA, four of them in a RAID-5 configuration. I am not sure about the controller model, but the server is about five years old.
3 Answers
Another thing you'll need to know is the ratio of sequential vs. random I/O requests, which affects the achievable I/O rate considerably. For rotational media, 100% random requests is your lower bound for I/O ops, and 100% sequential is your upper bound.
Also, since you'll be working with RAID 5, your read/write ratio will affect I/O ops too. This depends heavily on your RAID card, so there aren't many hard-and-fast rules of thumb beyond 'writes will be slower than reads', and even THAT can be spoiled by intelligent caching on the RAID card.
7.2K RPM drives will reach I/O saturation at some point, though it is possible your RAID card will hit CPU saturation well before then when handling lots of writes.
The only way to know for sure is to test. As Evan said, Iometer has a long track record. I've also used IOzone to good effect; it's not as advanced as Iometer, but it's a bit simpler to use. A quick home-grown probe is sketched below.
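If you just want a rough sanity check before setting up a proper tool, here's a minimal sketch (Linux, Python 3.7+) that measures the random-vs-sequential spread described above. The path is a hypothetical placeholder: preallocate a test file on the array, several times larger than RAM, before running it. This is a single-threaded probe under stated assumptions, not a substitute for Iometer.

```python
import mmap
import os
import random
import time

PATH = "/data/testfile.bin"   # hypothetical: preallocated file on the array, >> RAM
BLOCK = 8192                  # 8K reads, matching the question
DURATION = 10                 # seconds per access pattern

def read_iops(sequential):
    # O_DIRECT bypasses the page cache so we measure the disks, not RAM.
    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
    size = os.fstat(fd).st_size
    buf = mmap.mmap(-1, BLOCK)  # page-aligned buffer, required by O_DIRECT
    blocks = size // BLOCK
    ops, pos = 0, 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        pos = (pos + 1) % blocks if sequential else random.randrange(blocks)
        os.preadv(fd, [buf], pos * BLOCK)
        ops += 1
    os.close(fd)
    return ops / DURATION

print(f"100% random:     ~{read_iops(False):.0f} 8K read IOPS (lower bound)")
print(f"100% sequential: ~{read_iops(True):.0f} 8K read IOPS (upper bound)")
```

The gap between the two numbers shows how much your real workload's sequential/random mix will matter.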

+1 - Total agreement. Accurately modeling the read/write ratio, random vs. sequential access ratio, and cache hit percentage (if you can get it) in the test environment are necessary to pull off a realistic test. – Evan Anderson Jun 09 '10 at 22:53
I'm a big fan of Iometer. It's getting a bit long in the tooth without major updates, but I still see it used in recently published benchmarks, and that gives me confidence in its numbers.

I've done this kind of testing with sysbench. I created a large test file, several times larger than system memory, and tested only random I/O writes. That gives you a useful worst-case metric, and I'd do all the sizing around it. Real-world performance should be better thanks to caching, a mix of read and write traffic, and whatever sequential I/O you have. I would definitely optimize around IOPS, as I've found that I/O bottlenecks are much harder to fix than space problems. A minimal home-grown version of the same test is sketched below.
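For reference, here's a minimal sketch of the same worst-case test in Python (Linux, 3.7+) if sysbench isn't handy: random 8K writes through O_DIRECT and O_DSYNC, so neither the page cache nor deferred flushing can flatter the result. The path is hypothetical, and as above the file must already exist at several times RAM.

```python
import mmap
import os
import random
import time

PATH = "/data/iops_test.bin"  # hypothetical: preallocated file on the array, >> RAM
BLOCK = 8192                  # 8K random writes, worst case for the array
DURATION = 30                 # seconds

# O_DIRECT skips the page cache; O_DSYNC forces each write to stable storage.
# Note: this overwrites the test file's contents.
fd = os.open(PATH, os.O_RDWR | os.O_DIRECT | os.O_DSYNC)
size = os.fstat(fd).st_size
buf = mmap.mmap(-1, BLOCK)    # page-aligned buffer, required by O_DIRECT
buf.write(os.urandom(BLOCK))

ops = 0
deadline = time.monotonic() + DURATION
while time.monotonic() < deadline:
    offset = random.randrange(size // BLOCK) * BLOCK
    os.pwritev(fd, [buf], offset)
    ops += 1

os.close(fd)
print(f"~{ops / DURATION:.0f} random 8K write IOPS (worst case)")
```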
If you get the chance, I'd look at reconfiguring your 4-disk RAID 5 into a RAID 10, which should generally give better write I/O performance. You could test both ways to see exactly how your hardware performs in each configuration.
My rule of thumb is that 7.2k drives can add about 150 IOPS each, 10k about 200, and 15k up to 250, though that may be a little optimistic. Each mirror gives you added IOPS for reads and each stripe added IOPS for writes. So, for example, a 4-disk RAID 10 with 7.2k disks could give you up to about 300 IOPS of worst-case random writes, but no more.

RAID 5 is harder to estimate because it varies with how much data is written per request and with any shortcuts or optimizations in the implementation. In the common read-modify-write case, an 8K write to a 64K stripe costs four back-end I/Os on a 4-disk RAID 5: read the old data chunk, read the old parity chunk, then write the new data and the new parity (the arithmetic is sketched below). In practice I've found that random-write IOPS on RAID 5 is about half that of the same number of disks in RAID 10, so your 4-disk RAID 5 probably has a worst case around 150 IOPS for random writes, no better than a single disk.
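To make that arithmetic concrete, here's a small back-of-envelope sketch using the rules of thumb above. The per-disk figures and the write penalty of 4 are the assumptions from this answer, not measurements; controller caching can move real results in either direction.

```python
# Worst-case random-write IOPS estimates from the rules of thumb above.
PER_DISK_IOPS = {"7.2k": 150, "10k": 200, "15k": 250}  # assumed, per spindle

def raid10_write_iops(disks, per_disk):
    # Every write lands on both halves of a mirror pair, so only half
    # the spindles contribute to random-write throughput.
    return (disks // 2) * per_disk

def raid5_write_iops(disks, per_disk, penalty=4):
    # Small-write penalty: read old data + old parity, then write new
    # data + new parity = 4 back-end I/Os per logical write.
    return disks * per_disk // penalty

per_disk = PER_DISK_IOPS["7.2k"]
print("4-disk RAID 10:", raid10_write_iops(4, per_disk))  # 300
print("4-disk RAID 5: ", raid5_write_iops(4, per_disk))   # 150
```

The output reproduces the 300 vs. 150 comparison above: the 4-disk RAID 5 lands at roughly half the RAID 10 figure for random writes.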
