
I'm not sure how many NICs I need for vMotion support with my SAN. See the linked sketch for a picture (it's laughably crude).

PHLiGHT

2 Answers


Assuming that you are using iSCSI to connect to your SAN, you'll want at least 8 GigE ports on your VMware hosts: 2 for the management network, 2 for vMotion, 2 for iSCSI, and 2 for guest machines (or more than 2, depending on how many VLANs you want and whether they need separate physical NICs or not).

Each pair of NICs should also be on a separate VLAN to isolate its traffic. With fewer than 8 physical NICs, you run the risk of an outage if a network card, cable, or network switch fails.
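As a rough sketch of the "redundant pair per traffic type" idea, here is how one such pair might be wired up from the ESXi command line. This assumes the `esxcli` syntax from ESXi 5.x or later (the question predates it, so adjust for your version); the vSwitch name, `vmnic` numbers, and VLAN ID are placeholders for your environment:

```shell
# Create a vSwitch and give it two uplinks, so a failed NIC,
# cable, or upstream switch does not take the traffic down.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3

# Isolate the pair's traffic on its own VLAN via the port group tag.
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=vMotion
esxcli network vswitch standard portgroup set --portgroup-name=vMotion --vlan-id=42
```

You would repeat this pattern for each of the four traffic types (management, vMotion, iSCSI, guest networking), each with its own NIC pair and VLAN.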

mrdenny

See this list of cabling strategies for various numbers of physical NIC interfaces.

http://www.networkworld.com/community/taxonomy/term/17790

For your case, you have FOUR physical NICs (pNIC). Here's the explanation of best practices for that setup:

pNIC0 -> vSwitch0 -> Portgroup0 (service console)
                  -> Portgroup1 (VMotion)
pNIC1 -> vSwitch0 -> Portgroup2 (Storage Network)
pNIC2 -> vSwitch1 -> Portgroup3 (VM Network)
pNIC3 -> vSwitch1 -> Portgroup3 (VM Network)
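The layout above could be scripted roughly as follows. This is a sketch using ESXi 5.x+ `esxcli` syntax (not the ESX 4.x tooling current when this was written); it assumes `vmnic0`–`vmnic3` correspond to pNIC0–pNIC3, and the port group names mirror the diagram. The failover-policy lines split pNIC0/pNIC1 between port groups on vSwitch0 so each still has a standby path:

```shell
# vSwitch0: management, vMotion, and storage traffic on vmnic0/vmnic1.
esxcli network vswitch standard add --vswitch-name=vSwitch0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Service Console"
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name=VMotion
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0 --portgroup-name="Storage Network"

# Prefer vmnic0 for console/vMotion and vmnic1 for storage,
# with the other NIC as standby in each case.
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name=VMotion --active-uplinks=vmnic0 --standby-uplinks=vmnic1
esxcli network vswitch standard portgroup policy failover set \
    --portgroup-name="Storage Network" --active-uplinks=vmnic1 --standby-uplinks=vmnic0

# vSwitch1: guest VM traffic on vmnic2/vmnic3.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="VM Network"
```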
ewwhite
  • Thanks. I have a ticket in with Fujitsu for the SAN. I'm not sure how the SAN should be connected to the switch. – PHLiGHT Jan 02 '11 at 04:25
  • We're doing something similar with our ESXi implementation, but we have two Storage Network interfaces (with six total ports). mrdenny's suggestion is what I'm going to lobby my boss for, but it's a lot of extra money to make this jump for us (multiple servers all need a new network card, backplane [thanks, HP], and network ports to plug into), and while it would be nice, it may not be 100% necessary if you don't want/need 99.999% uptime. – Scott Keck-Warren Jan 03 '11 at 14:30