
I'm building out a Linux-based NAS server. My hardware consists of two 3ware 9500S-8 RAID cards and ten 2TB drives (5 per card). The 10 drives are going to be used purely for data (the OS is installed on separate drives), formatted with the XFS file system.

My choices are:

  1. Have 3ware export the drives as JBOD and set up RAID6 in Linux (~16TB of usable space, two drives can fail); see the sketch after this list.
  2. Set up a 5-disk RAID5 array on each card, add the two resulting physical volumes under XFS (~16TB usable, 1 drive can fail per array)
  3. Set up a 9-disk RAID5 array + 1 hot standby in 3ware (~16TB usable, two drives can fail)
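
For reference, option 1 would look roughly like the following with mdadm. This is only a sketch, assuming the 3ware cards expose each disk as a single-drive unit and the ten disks show up as /dev/sdb through /dev/sdk; the device names, mount point, and config path are placeholders, not tested values.

    # Create a 10-disk RAID6 array from the JBOD-exported drives (device names are assumptions)
    mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]

    # Format with XFS; mkfs.xfs reads the md stripe geometry automatically
    mkfs.xfs /dev/md0

    # Mount it and persist the array definition (mdadm.conf path varies by distro)
    mount /dev/md0 /srv/storage
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf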

In performance testing, I was getting about 70MB/s write on a 5-disk hardware RAID5 vs 60MB/s on software RAID5, so for performance reasons I'd prefer to have the RAID cards handle the RAID. I am concerned about having RAID5 arrays that large, though.
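
(A simple sequential-write check along the lines below is one way to get numbers like these; the mount point and sizes are placeholders, not necessarily what was used here.)

    # Rough sequential-write test: write 8 GiB, bypassing the page cache
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=8192 oflag=direct
    # Alternative: let the cache work, but flush before dd reports its rate
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=8192 conv=fdatasync
    rm /mnt/array/ddtest
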
My questions:

  1. Are there any issues spanning a RAID5 array across 2 3ware cards?
  2. Would the time required to rebuild the 9-disk RAID5 negate the benefit of having a hot-standby drive?
  3. The OS will be running on a CompactFlash drive, without a swap partition (8GB of RAM). Would a 16TB software RAID6 array be able to function effectively with that constraint?
John P
  • have you seen this http://blog.backblaze.com/2011/07/20/petabytes-on-a-budget-v2-0revealing-more-secrets/ – tony roth Jul 26 '11 at 18:56
  • Yes - they went with software RAID6 with plain SATA cards. My preference is to use hardware RAID since my cards support it and I've seen about 15% better performance with it. – John P Jul 26 '11 at 19:25

1 Answer


Friends don't let friends use RAID-5. The rebuild time after a failure with 2TB drives, especially in an array this large, is massive. So, that's Right Out. The presence or absence of a hot spare doesn't really help you: you'd never run a RAID array purposely degraded, so you'll be replacing that dead drive Real Quick, and with or without a hot spare you'll be into a rebuild just as fast. And if you're running your RAID array hard (which it certainly sounds like you will be, if you're worried about a piddling 15% performance increase), then there aren't many spare IOPS left for rebuilding the array. (Incidentally, if that 15% is important to you, the performance degradation during a rebuild will probably drop you below your requirements anyway. Just sayin'.)
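
(For the software-RAID option specifically: Linux md throttles rebuild/resync traffic behind normal I/O between two tunable limits, so a busy array rebuilds even more slowly. A quick sketch of the stock knobs, values in KiB/s per device:)

    # Watch rebuild/resync progress
    cat /proc/mdstat

    # md rebuild throughput floor and ceiling (KiB/s per device)
    sysctl dev.raid.speed_limit_min
    sysctl dev.raid.speed_limit_max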

You haven't mentioned what your use case for the RAID array is, which makes it nearly impossible to make a concrete recommendation, but my preferences are:

  • For performance: RAID-10.
  • For capacity: RAID-10, and just buy more damned disks (they're so cheeeeeap).
  • If you really, really don't care about your data: RAID-0.
  • If you're backblaze: do you have any job openings?

If your RAID cards don't support RAID-10, get better RAID cards. I'm a bit surprised you've gone out and bought a couple of RAID cards before planning your storage arrangements. A teensy bit backwards, to my way of thinking.
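
If you do end up exporting the disks as single units anyway, the RAID-10 can be built in Linux md instead of on the cards. A minimal sketch, reusing the placeholder device names from the earlier example (10x2TB gives roughly 10TB usable):

    # 10-disk software RAID-10: five mirrored pairs striped together
    mdadm --create /dev/md0 --level=10 --raid-devices=10 /dev/sd[b-k]
    mkfs.xfs /dev/md0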

womble
  • The space will all be exported as iSCSI targets (averaging about 500G per share), plus a couple of NFS shares. The iSCSI shares will be primary storage for a number of dedicated servers and Xen images. I inherited the hardware (it had a much smaller array running on it earlier). – John P Jul 26 '11 at 23:42
  • That's not a use case, it's an architecture diagram. IOPS matter. – womble Jul 27 '11 at 08:49
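
(Since IOPS is the deciding factor for an iSCSI/Xen workload, a mixed random-I/O test is more representative than sequential dd numbers. A sketch with fio; the file path, read/write mix, and sizes are placeholders:)

    # 70/30 random read/write at 4k, queue depth 32, direct I/O, aggregate results
    fio --name=iscsi-sim --filename=/mnt/array/fiotest --size=4G \
        --rw=randrw --rwmixread=70 --bs=4k --ioengine=libaio --iodepth=32 \
        --direct=1 --runtime=60 --time_based --group_reporting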