- No, the PERC H700 Adapter will not be a bottleneck for 16 spinning
rust disks.
- You want the members of each RAID-1 pair on different
channels and expanders, in order to increase reliability. That way a
bad cable, channel, or expander doesn't take the whole RAID-1 set
offline. A 15K spinning rust disk can only do about 2 Gbps max with
sequential reads, so you can put three such disks per 6 Gbps
channel. Often you can do much more than three, because only
backups really do streaming sequential reads or writes. All real
workloads have a lot of random IO, which will bring even a 15K disk down to just a few MB per second in throughput. (See the back-of-envelope sketch after this list.)
- Yes, you can mix disks, but why? Also, why aren't you using RAID-10 instead of a bunch of separate RAID-1 arrays?
- Unfortunately no, but any standards-compliant SAS Expander will work.
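To make the disks-per-channel arithmetic concrete, here's a back-of-envelope sketch in Python. The 2 Gbps sequential figure comes from above; the few-MB/s random figure is my rough assumption, not a measurement:

```python
# Back-of-envelope: how many 15K disks fit on one 6 Gbps SAS channel?
# Per-disk figures are rough assumptions for illustration only.

CHANNEL_GBPS = 6.0       # one SAS 2.0 channel
SEQ_DISK_GBPS = 2.0      # ~250 MB/s sequential from a 15K disk (generous)
RANDOM_DISK_MBPS = 4.0   # random IO collapses throughput to a few MB/s

print(f"Sequential: {CHANNEL_GBPS / SEQ_DISK_GBPS:.0f} disks saturate a channel")

# 4 MB/s random is only 0.032 Gbps, so the channel is nowhere near the limit:
random_disk_gbps = RANDOM_DISK_MBPS * 8 / 1000
print(f"Random:     {CHANNEL_GBPS / random_disk_gbps:.0f} disks per channel")
```

As it shows, three disks saturate a channel only in the streaming case; under random IO you could hang well over a hundred spindles off one channel before it mattered.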
The real suggestion I would make is this: unless you are running the Large Hadron Collider, aggregate disk bandwidth doesn't matter; IOPS are what matter for the overwhelming majority of workloads. Stop trying to make spinning rust disks fast - they aren't. Disk is the new tape.
If you need performance, remember that most workloads need IOPS more than bandwidth or capacity. Buy your Dell server with the cheapest SATA drives (to get the carriers), then replace those cheap SATA drives with the smallest number of Intel 500-series SSDs that meets your capacity needs. Dell's SSD offerings are terribly overpriced compared with Intel SSDs from, say, NewEgg, even though the Intels perform better and are more reliable than whatever Dell is shipping for SSDs (Samsung?).
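As a minimal sizing sketch (made-up capacities, and it assumes the RAID-5 layout suggested next, where usable space is (n - 1) × drive size):

```python
import math

def ssds_needed(target_tb: float, drive_tb: float) -> int:
    """Smallest RAID-5 SSD count meeting a capacity target.

    RAID-5 usable capacity is (n - 1) * drive_tb; minimum is 3 drives.
    The capacities passed in are made-up examples, not a recommendation.
    """
    return max(3, math.ceil(target_tb / drive_tb) + 1)

# e.g. 1.0 TB needed from hypothetical 600 GB SSDs -> 3 drives (1.2 TB usable)
print(ssds_needed(target_tb=1.0, drive_tb=0.6))
```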
Make one big RAID-5 array of SSDs. Even just 3 modern MLC SSDs in RAID-5 will absolutely destroy 16 15K spinning rust disks in terms of IOPS, by a factor of 10x or more. Sequential throughput is a non-issue for most applications, but the SSDs will also be 2x faster than spinning disks in that regard. Use large-capacity 7.2K SATA disks for backup media or for archiving cold data. You'll spend less money and use less power with SSDs.
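For a rough sense of that IOPS gap, here's a sketch with assumed per-device numbers (the SSD figures are in the ballpark of Intel's MLC datasheets, the 15K figure is the usual seek-limited estimate, and I'm assuming the 16 spindles are in RAID-10):

```python
# Rough IOPS comparison: 3 MLC SSDs in RAID-5 vs. 16 15K disks in RAID-10.
# All per-device numbers below are assumptions for illustration only.

HDD_15K_IOPS = 180           # seek + rotational latency limited
SSD_MLC_READ_IOPS = 35_000
SSD_MLC_WRITE_IOPS = 20_000
RAID5_WRITE_PENALTY = 4      # each small write = 2 reads + 2 writes
RAID10_WRITE_PENALTY = 2     # each write hits both mirror members

hdd_read = 16 * HDD_15K_IOPS                       # reads hit all spindles
hdd_write = 16 * HDD_15K_IOPS / RAID10_WRITE_PENALTY

ssd_read = 3 * SSD_MLC_READ_IOPS
ssd_write = 3 * SSD_MLC_WRITE_IOPS / RAID5_WRITE_PENALTY

print(f"16x 15K HDD : ~{hdd_read:,.0f} read / ~{hdd_write:,.0f} write IOPS")
print(f" 3x MLC SSD : ~{ssd_read:,.0f} read / ~{ssd_write:,.0f} write IOPS")
```

Even charging the SSDs the full RAID-5 write penalty, they come out roughly 10x ahead on writes and ~35x ahead on reads, which is where the "factor of 10x or more" comes from.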
Resistance to SSDs on reliability grounds is largely FUD from conservative storage admins and SAN vendors who love their wasteful million-dollar EMC arrays. Recent "enterprise MLC" SSDs are at least as reliable as mechanical disks, and probably much more reliable (time will tell). Wear leveling makes write lifetime a non-issue, even in the server space. Your biggest worry is firmware bugs rather than hardware failure, which is why I suggest going with Intel SSDs.