
I have a 16-port Areca ARC-1260D PCIe RAID card and I was planning to use it for one large array (either RAID 10 or RAID 6). I hadn't considered whether it was possible to split the card's disks into two (or more) separate RAID arrays. The card in question does support multiple RAID sets.

Is there a large performance drop-off (or any other caveat) in running multiple RAID sets off a single adapter? I've typically used RAID adapters for a single array at a time, so I'm not sure whether it's wise to put multiple RAID sets on one adapter. Initially I was planning to use the entire array for XenServer VMs, but with this option I'm thinking of making one array for the VMs and another for simple file storage.

Edit: This is for SATA, not SAS. Initially I was looking to fill the array with 1.5 TB SATA disks, but the price of 2 TB disks has fallen dramatically, so I'm now thinking of having two RAID arrays on the card: one with 6-8 1.5 TB disks, the other with 6-8 2 TB disks.
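
For a rough sense of how the layouts mentioned above compare on usable space, here is a small back-of-the-envelope sketch (Python, using the standard RAID 10 and RAID 6 capacity rules; it ignores formatting overhead and uses the disk counts and sizes from the edit above):

    # Usable capacity: RAID 10 keeps half the raw space (mirrored pairs),
    # RAID 6 loses two disks' worth to parity. Sizes are nominal decimal TB.
    def usable_tb(disks, size_tb, level):
        if level == "RAID10":
            return (disks // 2) * size_tb
        if level == "RAID6":
            return (disks - 2) * size_tb
        raise ValueError(f"unsupported level: {level}")

    for disks, size in [(8, 1.5), (8, 2.0)]:
        for level in ("RAID10", "RAID6"):
            print(f"{disks} x {size} TB {level}: {usable_tb(disks, size, level):.1f} TB usable")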

osij2is

3 Answers


There are a few reasons you might see a minor performance drop from having multiple RAID arrays. Firstly, you're likely to be splitting the cache to some degree, but that shouldn't be particularly impactful. You're also more likely to introduce queue/bus contention, but again that shouldn't make much of a difference. What matters more is that you'll have fewer disks in each array, and that is likely to have a much bigger impact overall.
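
To put the "fewer disks per array" point in rough numbers, here is a small sketch; the per-disk figure of 75 random IOPS is an assumed ballpark for 7,200 rpm SATA drives, not a measurement of this controller:

    # Random-I/O ceiling of a RAID 10 array scales roughly with spindle count.
    # 75 IOPS/disk is an assumed ballpark for 7,200 rpm SATA, not a measurement.
    IOPS_PER_DISK = 75

    def raid10_host_iops(disks, read_fraction=0.7):
        # each host read costs one disk op; each host write costs two (mirror pair)
        disk_ops_per_host_io = read_fraction + 2 * (1 - read_fraction)
        return disks * IOPS_PER_DISK / disk_ops_per_host_io

    print("16-disk RAID 10, ~70% reads:", round(raid10_host_iops(16)), "IOPS")
    print(" 8-disk RAID 10, ~70% reads:", round(raid10_host_iops(8)), "IOPS")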

Chopper3
  • While more spindles for performance is usually a good rule of thumb, more disks isn't always better depending on your performance profile -- a larger stripe width means higher latency on partial writes and it may prevent you from having a stripe width equal to your write size (4k+1 disks on RAID-5, or 4k+2 disks on RAID-6, is the sweet spot here). – jgoldschrafe Nov 03 '10 at 17:14
  • what, as opposed to using R10? I'm one of the guys on here that hates R5/6 entirely. – Chopper3 Nov 03 '10 at 17:25
  • +1: Thanks for the input, Chopper3. Off-topic: why do you *hate* R5/6 entirely? I have no real feelings one way or the other, I'm just interested to know your reasons (I'm assuming reasons beyond small random write issues on R5/6). – osij2is Nov 03 '10 at 20:09
  • Basically because I don't trust it, especially R5 - if you lose a disk you're open to instant array death until the array is rebuilt, which can take DAYS with these cheapo large SATA disks (there's some rough rebuild-time arithmetic after these comments); it exposes you too much, especially as if you lose one disk you're inherently more likely to lose another of the same type and history soon. R6 is just too slow for writes. – Chopper3 Nov 03 '10 at 20:34
  • I can understand the array rebuilding process being a bitch. Now, I haven't had that many problems with R6 in terms of performance for writes, but then again, most of my arrays are R10. I prefer R5/6 just for available storage, not for performance reasons. R10 is *often* preferred, but money is always a concern. – osij2is Nov 03 '10 at 21:28
  • I have to admit that I use R6 for storing short-term (<1 day) backups. – Chopper3 Nov 03 '10 at 21:41
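
For a sense of scale on the rebuild-time point above: rebuild time grows with disk size divided by the sustained rebuild rate, and the rates below are assumptions (an otherwise idle array versus a rebuild throttled by live I/O), not Areca figures:

    # Rough rebuild-time estimate: the whole replacement disk must be rewritten,
    # so time ~ disk size / sustained rebuild rate. Rates are assumed examples.
    def rebuild_hours(disk_tb, rate_mb_s):
        disk_mb = disk_tb * 1_000_000   # decimal TB -> MB
        return disk_mb / rate_mb_s / 3600

    for size_tb in (1.5, 2.0):
        for rate in (50, 10):           # fairly idle array vs. heavily loaded array
            print(f"{size_tb} TB disk at {rate} MB/s: ~{rebuild_hours(size_tb, rate):.0f} hours")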

You're not going to see any difference with RAID-10. With RAID-5 and RAID-6 you might theoretically see an indiscernibly small difference in write throughput when writing heavily to both LUNs at the same time, depending on Areca's implementation. What you're concerned with is essentially premature optimization, and you should probably be focusing on these general rules instead before you turn to vendor-specific implementation quirks:

  • Make sure your controller's write-back cache is functioning properly. A good write cache accounts for a huge amount of your write performance, especially with random writes, when striping with parity.
  • Understand partition alignment. I believe that on Linux, most of the graphical partition tools (e.g. GParted) will align your partitions by default, but fdisk and parted won't. This leads to boundary crossings, which kill your performance by potentially turning a single write operation into several reads and several writes. (There's a small alignment sketch after this list.)
  • When you write in RAID-5 or RAID-6, the controller needs to read data from the entire stripe in order to recompute parity. Understand the implications of segment sizing and how it relates to your data, especially in terms of partial writes (which require reads before a portion of the block can be modified). In general, the larger your stripe width, the greater your sequential read throughput, but the more data that needs to be read when recomputing parity on write. The smaller your stripe width, the better the chance that you aren't making partial writes and don't need to recompute parity at all, but the more disks you need to fulfill I/O requests. It's a trade-off and you really need to understand your application. (The write-penalty sketch after this list shows the basic arithmetic.)
  • Listen to your vendor's recommendations regarding application performance. Areca allows you to create multiple volume sets on a single RAID set -- take advantage of this functionality if your applications demand I/O performance. You can create multiple RAID volumes with different stripe widths.
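
On the alignment point, a quick way to reason about it is to check whether a partition's starting offset is an exact multiple of the array's segment size. Here is a minimal sketch in Python; the 64 KiB segment size is an assumed example value, not something read from the controller:

    # Minimal alignment check: a partition start that is not a multiple of the
    # RAID segment size means a single write can straddle a segment boundary,
    # turning it into extra read-modify-write work. Values are examples only.
    SECTOR_BYTES = 512               # bytes per LBA sector
    SEGMENT_BYTES = 64 * 1024        # assumed controller segment size (64 KiB)

    def is_aligned(start_sector, segment_bytes=SEGMENT_BYTES):
        return (start_sector * SECTOR_BYTES) % segment_bytes == 0

    print(is_aligned(63))     # False: old DOS-style partition start, misaligned
    print(is_aligned(2048))   # True: 1 MiB boundary, aligned for common segment sizes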
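
On the segment-size point, the cost of a partial (sub-stripe) write can be counted directly. This is the standard read-modify-write accounting for parity RAID, not anything Areca-specific:

    # Disk operations for a single small write on each RAID level:
    # RAID-10 writes both mirror copies; RAID-5/6 must read old data and old
    # parity, then write new data and new parity (two parities for RAID-6).
    def small_write_disk_ops(level):
        return {"RAID10": 2, "RAID5": 4, "RAID6": 6}[level]

    def full_stripe_write_disk_ops(data_disks, level):
        # A full-stripe write needs no reads: all new data plus freshly computed parity.
        parity_disks = {"RAID5": 1, "RAID6": 2}[level]
        return data_disks + parity_disks

    for level in ("RAID10", "RAID5", "RAID6"):
        print(f"{level} small write: {small_write_disk_ops(level)} disk ops")
    print("RAID6 full-stripe write across 6 data disks:",
          full_stripe_write_disk_ops(6, "RAID6"), "disk ops, no reads")
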
jgoldschrafe
  • +1: Thanks for partition alignment. Granted, I'll be using *Citrix* XenServer, I'll definitely keep that point in mind. Would you recommend against using R5/6 for VMs assuming the VMs are used more for reading rather than writing? – osij2is Nov 03 '10 at 20:11
  • Given the hardware, RAID-5 with hot spare unless writes are rare and highly sequential and write performance doesn't matter much. Enterprise (SAN-grade) RAID-6 implementations are halfway fast, but most RAID-6 controllers do remarkably little RAID-6 processing in hardware. – jgoldschrafe Nov 04 '10 at 00:08

I've used a number of older 11xx and 12xx series Areca controllers and currently manage a server with an 1880 series controller.

The way volume management works with Areca cards is similar to how LVM works on Linux. First you assign drives to RAID Sets, which are similar to volume groups. Then you create Volumes on the RAID Set, and it's at the Volume level that you specify the RAID level. Hot spares can be global or tied to a particular RAID Set, and you can mix RAID levels on a single RAID Set if you wish. They also support online RAID level and volume migrations.
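
To make the LVM analogy concrete, here is a tiny illustrative model of that hierarchy; the class names, sizes and slot labels are made up for the sketch and are not Areca's or LVM's actual interfaces:

    # Drives go into a RAID Set (like an LVM volume group); Volumes are carved
    # out of the RAID Set, and each Volume carries its own RAID level (like
    # logical volumes). Names, sizes and slots below are placeholders only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Volume:
        name: str
        raid_level: str        # chosen per Volume, e.g. "RAID10" or "RAID6"
        size_tb: float

    @dataclass
    class RaidSet:
        drives: List[str]                               # physical disks in this set
        volumes: List[Volume] = field(default_factory=list)

    rs = RaidSet(drives=[f"slot{i}" for i in range(1, 9)])
    rs.volumes.append(Volume("vm-store", "RAID10", 4.0))
    rs.volumes.append(Volume("file-store", "RAID6", 6.0))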

The performance impact of having multiple volumes on one RAID Set is going to depend on your workload and a number of other factors like system RAM, controller cache, etc. But keep in mind that most of the Areca controllers use standard memory modules for cache, so they are field upgradeable. I just checked and it looks like the 1260 uses SO-DIMMs, but I didn't check the largest SO-DIMM it can accept.

3dinfluence
  • +1: thanks for the input on the controller itself. I've read parts of the manual so I'm familiar with the volume management but I haven't really worked with it too much (in terms of multiple RAID sets). – osij2is Nov 03 '10 at 20:14