
I'm about to set up a 2-node Hyper-V cluster with an HP P2000 SAN. What are the pros/cons of SAS 6Gb vs iSCSI 1Gb? Thanks

despe

4 Answers


P2000sa Pros - 6Gbps is faster than 1 or 2Gbps (unless you buy the 10Gbps iSCSI version). P2000sa Cons - clustering may not work, as this dual-SAS-access method traditionally hasn't.

P2000i Pros - definitely works. P2000i Cons - 1 or 2Gbps is slower than 6Gbps (unless you buy the 10Gbps version).

Basically, unless you get a chance to fully test this clustering functionality on the SAS version, I'd avoid it and go for the iSCSI version. Alternatively, consider the FC version: it's fast at 8Gbps, virtually the same price as the iSCSI version, you'd only need 2 x FC HBAs and no switches, and it definitely works.
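To put rough numbers on the speed comparison above, here is a back-of-the-envelope sketch (the 80% usable-bandwidth factor and the 100 GB VM are my own illustrative assumptions, not figures from this thread):

```python
# Back-of-the-envelope link comparison. The 0.8 efficiency factor and the
# 100 GB VM size are illustrative assumptions, not measured figures.
links_bps = {
    "iSCSI 1 Gbps": 1_000_000_000,
    "SAS 6 Gbps":   6_000_000_000,
    "FC 8 Gbps":    8_000_000_000,
}

vm_size_mb = 100 * 1000   # hypothetical 100 GB VM to copy between hosts
efficiency = 0.8          # crude allowance for line coding and protocol overhead

for name, bps in links_bps.items():
    usable_mb_s = bps * efficiency / 8 / 1_000_000
    minutes = vm_size_mb / usable_mb_s / 60
    print(f"{name}: ~{usable_mb_s:.0f} MB/s usable, ~{minutes:.1f} min for the VM")
```

Even with these generous assumptions, a single 1Gb iSCSI link is the obvious bottleneck, which is why the dual-NIC/MPIO discussion in the comments below matters.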

Chopper3
  • whoa, fast response, thank you... can you explain a bit more why clustering may not work with SAS? – despe Mar 01 '11 at 13:07
  • Well, when you cluster you create a 'clustered file system'; in this case MSDTC needs a shared 'quorum' area as a bare minimum. The cluster service arbitrates reads and writes between the cluster members on a slightly-modified NTFS partition/disk to enable this sharing. Now you'll also need a shared partition/disk to store your actual VMs on, so that if/when one HV host goes down the other one can take over; this disk will also be managed by the cluster service as an NTFS disk. FC/FCoE & iSCSI all 100% work for this; shared SAS doesn't always - it may work, but basically you need to check/test. – Chopper3 Mar 01 '11 at 13:20
  • I understand now why I didn't see any Hyper-V configs with a SAS SAN on the web... (and my vendor seems to be ignorant of this kind of problem)... so if I choose iSCSI 1Gb (FC is too expensive for us), I need 2 switches for redundancy and 2 network cards dedicated to iSCSI (I have to use MPIO, I think), right? – despe Mar 01 '11 at 13:38
  • I'd suggest dual NICs dedicated to the iSCSI traffic, yes, but can't you just patch them straight to the P2000 without a switch? You only have two servers - patch one NIC to each controller? – Chopper3 Mar 01 '11 at 16:20
  • Of course, you're right - I'd just forgotten the P2000 has 2 controllers. Thanks for your time – despe Mar 02 '11 at 07:48
  • One last question... my 2 ProLiant nodes have 4 NICs each. With 2 dedicated to iSCSI, how should I assign the other 2? One for management and one for VMs (but what about CSV and live migration?), or do I need to add more NICs? Thanks – despe Mar 02 '11 at 07:55
  • Unless you expect a full 1Gbps of VM traffic I'd be tempted to create a single vSwitch with both NICs (possibly set to round-robin active/active mode) and push both the management and VM traffic down that vSwitch. – Chopper3 Mar 02 '11 at 09:40
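As a purely conceptual sketch of what the round-robin active/active (MPIO-style) idea in the comments above means - real path selection happens inside the Windows MPIO driver stack, not in user code, and the path names below are made up:

```python
from itertools import cycle

# Two hypothetical paths: one NIC patched directly to each P2000 controller.
paths = ["NIC1 -> controller A", "NIC2 -> controller B"]
next_path = cycle(paths)   # round-robin: alternate I/O across both paths

def submit_io(block_id: int) -> None:
    # A real MPIO driver picks the path; here we just alternate to show how
    # load spreads across both links and either one alone keeps I/O flowing.
    path = next(next_path)
    print(f"block {block_id} -> {path}")

for block_id in range(4):
    submit_io(block_id)
```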

iSCSI is the more scalable approach if you plan on growing to substantially more hosts, since you scale it using ordinary Ethernet networking. To my knowledge there aren't that many SAS switches available, and some of them may need special settings to allow sharing of LUNs with multiple hosts simultaneously, which might be needed for certain setups like a clustered file system.

pfo
  • Thanks, do you have any links pointing to the problems with SAS and a Hyper-V cluster? – despe Mar 01 '11 at 13:17
  • We have an HP MSA60 with SAS connections to two Hyper-V servers. Even though the MSA60 is configured so that the two Windows servers shouldn't be able to see each other's slices of the disks, they can, and we've had problems with that. One time both of the Windows servers wrote to the same partition, messing it up completely: a couple of VHDs disappeared, others got corrupted, etc. I don't know anything about using a clustered filesystem in this setup, but I would choose the iSCSI solution. – 3molo Mar 01 '11 at 13:23
  • @3molo - that's interesting; I know that can be done but haven't 'met' anyone who's tried it. I'd expect the kind of behaviour you've seen, hence why I wouldn't recommend that approach. That said, I bet some clever spark could get it working via a lot of tweaking and save themselves a fortune :) – Chopper3 Mar 01 '11 at 13:29
  • Just to clarify: a clustered file system is something like VMFS, or the modified NTFS used in Hyper-V setups (and many others). – pfo Mar 01 '11 at 14:17

We have the following scenario:

2 HP DL370 G7

  • Windows 2008 R2
  • HP HB08s Dual port E-SAS
  • Failover Cluster
  • Hyper-V R2

1 HP P2000 G3 SAS 6Gb

2 Vdisks

  • 1st Vdisk: 10 x 146GB 15K disks in RAID 10 + 2 spares
  • 2nd Vdisk: 10 x 300GB 10K disks in RAID 10

Vdisk 1 is served as LUN001; Vdisk 2 is served as LUN002.

Both Vdisks are available to the failover cluster as CSVs.
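As a quick sanity check on the usable space in this layout (my own arithmetic, not figures from the post): RAID 10 stripes across mirrored pairs, so usable capacity is half the member disks, and hot spares hold no data until a rebuild.

```python
def raid10_usable_gb(member_disks: int, disk_gb: int) -> int:
    # RAID 10 stripes across mirrored pairs: usable space is half the members.
    return member_disks // 2 * disk_gb

# Spares are excluded because they hold no data until a rebuild kicks in.
print("Vdisk 1 (10 x 146GB + 2 spares):", raid10_usable_gb(10, 146), "GB usable")  # 730 GB
print("Vdisk 2 (10 x 300GB):           ", raid10_usable_gb(10, 300), "GB usable")  # 1500 GB
```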

Every VM is in the failover cluster.

Currently running 17 servers (public and private) flawlessly.

Probably the best buy we ever made for the money. Sure, you can find much better; expandability is limited (with redundant ports, just 4 servers) and you need extra e-SAS cards to access the P2000, but it is fast and easy to use.


If you're going to have no more than 4 nodes in your cluster, then SAS is the way to go - it outperforms iSCSI hands down and nears Fibre Channel performance levels for half the price. It costs less as you don't need any switching, and this also means you have fewer interconnects, so fewer potential faults. The cons are that if you need to scale out beyond 4 nodes you won't have redundant connections to the MSA's controllers, and 8 nodes is the absolute physical limit.

If it is high scalability you're after, though, you should be looking at Left Hand instead of the iSCSI MSA (or P2000, as they are now called).

If you're looking for absolute performance, then the only way is 8Gb Fibre Channel.