
I'm building a new dev computer. It will run a few VMware Workstation virtual machines. I was advised on Server Fault to use RAID 10 for performance; RAID 10 uses 4 disks.

I contacted my supplier, who suggested a Gigabyte X58A motherboard and 4 Western Digital Caviar Black 6Gb/s disks.

I have checked the spec for the X58A board, however, and it says: SATA 3Gb/s: RAID 0, RAID 1, RAID 5, and RAID 10; SATA 6Gb/s: RAID 0 and RAID 1.

I'm losing half the bandwidth because I'm forced to use SATA 2! What should I do?

Avi

1 Answer


You are losing half the bandwidth on each drive interface, but the WD Caviar Black is a 7,200 rpm mechanical drive and it can't saturate a 3Gb/s SATA 2 interface. Since the X58 supports eight 3Gb/s SATA 2 drives, I don't think you're actually losing anything.

Edited to add clarifications for your comment:
The main reason these drives have SATA 3 interfaces is marketing: 6Gb/s is faster than 3Gb/s, which makes a nice differentiator for the people putting the sales pitch together, but it doesn't mean it's actually particularly useful. There are some edge-case advantages for some extreme workloads, but almost all of those are handled better by OS-level caching, which negates the benefit for these drives for the most part, as you will find if you dig out any benchmarks comparing their performance on 3Gb/s interfaces versus 6Gb/s ones. For sustained IO, the spinning media of a current-generation 7,200 rpm terabyte-size hard drive will never exceed about 1Gb/s. SSDs are quite a different beast, and the faster SATA 3 interfaces can be significantly beneficial with them.
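A quick back-of-the-envelope check makes the point. The figures below are illustrative assumptions (roughly 120 MB/s sustained for a 2010-era 7.2k terabyte drive; SATA uses 8b/10b encoding, so a 3Gb/s line rate yields about 300 MB/s of usable bandwidth), not measurements of any particular drive:

```python
# Back-of-the-envelope: can a 7.2k mechanical drive saturate a SATA 2 link?
# Throughput figures are illustrative assumptions, not measurements.

SATA2_LINK_GBPS = 3.0        # raw line rate, Gb/s
ENCODING_EFFICIENCY = 0.8    # 8b/10b encoding: 8 data bits per 10 line bits
DRIVE_SUSTAINED_MBPS = 120   # assumed sustained rate of a 7.2k 1 TB drive, MB/s

link_capacity_mbps = SATA2_LINK_GBPS * 1000 / 8 * ENCODING_EFFICIENCY
utilisation = DRIVE_SUSTAINED_MBPS / link_capacity_mbps

print(f"SATA 2 usable capacity: {link_capacity_mbps:.0f} MB/s")
print(f"Drive uses about {utilisation:.0%} of the link")
```

Under those assumptions the drive uses well under half the SATA 2 link, so doubling the link speed with SATA 3 buys nothing for sustained transfers.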

RAID 10 reads on a 4-drive RAID 10 pack can hit 4x the bandwidth of a single drive, but the X58 board you list supports 8 SATA 2 channels. Since SATA connections are point to point, all drives can transfer data back to the controller at their maximum transfer rates concurrently. The X58's controller might not be able to realize a full 4x improvement in transfer rate for the RAID pack over a single drive, but if so, that will be because of the capabilities of the controller's RAID processing circuitry, not the SATA interfaces. A dedicated hardware RAID card would certainly deliver the full improvement, but it would cost a bit more.
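The arithmetic can be sketched the same way. Because each drive sits on its own point-to-point link, the aggregate read bandwidth of the array is the sum of per-drive rates, and no single link ever needs to carry 4x the traffic (per-drive and link figures are the same assumed values as above):

```python
# Aggregate read bandwidth of a 4-drive RAID 10 on dedicated SATA 2 ports.
# Figures are illustrative assumptions.

DRIVE_SUSTAINED_MBPS = 120   # assumed per-drive sustained read, MB/s
SATA2_USABLE_MBPS = 300      # ~3 Gb/s line rate after 8b/10b encoding
DRIVES = 4                   # RAID 10 reads can be striped across all four members

# Each drive has its own point-to-point link, so each transfer is capped
# by min(drive, link) -- here the drive, not the link, is the bottleneck.
per_drive = min(DRIVE_SUSTAINED_MBPS, SATA2_USABLE_MBPS)
aggregate = per_drive * DRIVES

print(f"Best-case RAID 10 read: {aggregate} MB/s")
print(f"Single SATA 2 link:     {SATA2_USABLE_MBPS} MB/s")
```

The aggregate exceeds a single link's capacity, but that's fine: the load is spread across four independent links, each of which still has headroom.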

Helvick
  • Thank you for your answer but, sorry, I don't get it. 1. If a Caviar Black can't saturate a 3Gb/s SATA 2 link, why did they make it support SATA 3 6Gb/s? 2. AFAIR, RAID 10 reads can be 4 times as fast as a regular disk. So even if 1 disk can't, won't the RAID saturate the SATA 2? – Avi May 15 '10 at 20:45
  • A RAID array might saturate a single SATA 2 link, but you're using 4. IOW, the higher bandwidth would help if you used it to link into a SATA multiplier (similar to a hub or switch) and from there to several disks. For the common point-to-point case, it's (still) overkill. – Javier May 16 '10 at 03:03
  • Thank you - this clarified it for me. If I may ask: suppose I also want an SSD, and the board supports 6 x SATA 3Gb/s and RAID 10 on the South Bridge, AND has a Marvell 9128 chip supporting up to 2 SATA 6Gb/s devices. Then the RAID should go to the South Bridge and the SSD to the Marvell? – Avi May 16 '10 at 12:41
  • In general you would want to separate the RAID 10 pack and the SSD as much as possible, and that might be the case here, but there's at least one review out there indicating that the Marvell 9128 6Gb/s SATA 3 controller doesn't actually outperform the ICH10: http://benchmarkreviews.com/index.php?option=com_content&task=view&id=413&Itemid=38&limit=1&limitstart=7 . Your mileage may vary, and firmware updates may make a major difference, so if you do plan to do this, test both options. Bleeding-edge technology often has teething issues like this. – Helvick May 16 '10 at 13:21