I'm building out a Linux-based NAS server. The hardware consists of two 3ware 9500S-8 RAID cards and ten 2TB drives (five per card). The ten drives will be used purely for data (the OS is installed on separate drives) and formatted with the XFS file system.
My choices are:
- Have the 3ware cards export the drives as JBOD and set up RAID6 in Linux (~16TB usable, any two drives can fail); see the sketch after this list.
- Set up a 5-disk RAID5 array on each card and combine the two resulting physical volumes (e.g. via LVM) under XFS (~16TB usable, one drive can fail per array).
- Set up a 9-disk RAID5 array plus one hot spare in 3ware (~16TB usable; one drive can fail at any time, a second only after the spare has finished rebuilding).
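
For the first option, here is a minimal sketch of what I have in mind; the device names and mount point are placeholders and assume the JBOD-exported disks show up as /dev/sdb through /dev/sdk:

```
# Create a 10-disk software RAID6 from the JBOD-exported drives
mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]

# XFS on top; mkfs.xfs picks up the stripe geometry from the md device
mkfs.xfs /dev/md0
mount /dev/md0 /srv/data
```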
In performance testing I was getting about 70MB/s write on a 5-disk hardware RAID5 versus 60MB/s on software RAID5, so for performance reasons I'd prefer to let the RAID cards handle the RAID. I am concerned about having RAID5 arrays that large, though.
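
For context, a rough way to reproduce that kind of write test is a large direct-I/O sequential write; the mount point and size below are placeholders:

```
# Rough sequential-write test that bypasses the page cache (oflag=direct)
dd if=/dev/zero of=/srv/data/testfile bs=1M count=8192 oflag=direct
```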
My questions:
- Are there any issues with spanning a RAID5 array across two 3ware cards?
- Would the time required to rebuild the 9-disk RAID5 negate the benefit of having a hot-spare drive? (A rough rebuild-time estimate is sketched after this list.)
- The OS will be running on a CompactFlash drive without a swap partition (8GB of RAM). Would a 16TB software RAID6 array be able to function effectively under that constraint?
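
For the rebuild-time question, here is my back-of-envelope estimate plus the knobs md exposes for the software-RAID option; the array name and sustained rebuild rate are assumptions:

```
# Back of the envelope: a rebuild has to rewrite one full 2TB member disk,
# so at a sustained ~60MB/s that's about 2,000,000MB / 60MB/s ≈ 9-10 hours,
# and longer if the array is serving I/O during the rebuild.

# Watch rebuild/resync progress on a software array:
cat /proc/mdstat

# md throttles resync speed; these set the per-disk floor/ceiling in KB/s:
sysctl dev.raid.speed_limit_min
sysctl dev.raid.speed_limit_max
```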