
I am planning to virtualize two servers where the bulk of network traffic will be between just these servers. Will I see a substantial benefit by configuring an internal network between the virtual machines, and only letting traffic destined for clients out via the bridged adapter?

I plan to use either VMware ESXi or Hyper-V as the hypervisor and Windows Server 2008 as the guest OS. Is it even possible to set up the servers this way? If the servers see two paths to each other, how can I configure them to use the internal network in one case, and the bridged adapter in the other?
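Something like this is what I have in mind - each server would get a second virtual NIC on the internal network, with an address in a private subnet and no default gateway (the adapter names and addresses below are just placeholders):

    rem On server A - internal-only adapter, static address, no gateway
    netsh interface ipv4 set address name="Internal" source=static address=192.168.100.1 mask=255.255.255.0

    rem On server B
    netsh interface ipv4 set address name="Internal" source=static address=192.168.100.2 mask=255.255.255.0

Each server would then reference its peer by the 192.168.100.x address (or a hosts-file entry that resolves to it) for the heavy server-to-server traffic, and by the normal bridged address for client traffic.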

Is it even worth trying to do this, or would the configuration complexity eventually come back to hurt me? I can see how it might cause problems if one of the servers is moved to a different VM host.

Nic

4 Answers


I can't speak for Hyper-V, but all versions of VMware ESX have software 'vSwitches' which switch Ethernet traffic between two VMs on the same host as fast as the processor will allow - usually significantly faster than even 10Gbps Ethernet. In fact this configuration is the default; forcing each VM's traffic out onto the physical network, and back in if appropriate, is something people have to go out of their way to achieve, usually for security reasons. ESX/i v4 is particularly fast at this with Windows 2008 guests, by the way.
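If you did still want a separate host-only path alongside the default one, it's a couple of commands from the ESX service console - a rough sketch with placeholder names (on ESXi you'd normally do the same thing through the VI/vSphere client instead):

    # Create a vSwitch with no physical uplink - traffic on it never leaves the host
    esxcfg-vswitch -a vSwitchInternal
    # Add a port group that the VMs' second vNICs can attach to
    esxcfg-vswitch -A InternalNet vSwitchInternal
    # Deliberately do NOT link an uplink (esxcfg-vswitch -L vmnicX) so it stays internal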

As I say, I can't speak for Hyper-V, but I strongly suspect it does something very similar - I'm sure someone will answer that part very quickly.

Chopper3
  • My takeaway here is that the CPU on my VM host should be as fast as possible. I think I'll try doing some bandwidth tests on various processors, just to be sure. – Nic Oct 22 '09 at 23:41
  • Take heart that pretty much any processor from the last 12 months will steam through this kind of work in its sleep :) – Chopper3 Oct 23 '09 at 08:19

I don't know much about VMware. Under Hyper-V, however, traffic between two VMs running on the same physical host will not pass over the wire unless you actually configure two virtual switches on that host, each on a different physical Ethernet adapter, and you configure the VMs so that they are attached to different switches.

So you'd actually have to go out of your way to force the traffic onto the wire. Just attach the VMs to the same virtual Ethernet switch. External traffic will go on the wire and internal traffic will go through memory.
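To sketch that (the cmdlets below are from the Hyper-V PowerShell module that ships with later Windows releases - on Server 2008/2008 R2 you'd do exactly the same thing in Hyper-V Manager's Virtual Network Manager; the switch, VM and adapter names are placeholders):

    # One external virtual switch bound to the physical NIC
    New-VMSwitch -Name "LAN" -NetAdapterName "Ethernet"
    # Attach both VMs to that same switch: VM-to-VM traffic stays in host memory,
    # client traffic goes out through the physical adapter
    Connect-VMNetworkAdapter -VMName "Server1" -SwitchName "LAN"
    Connect-VMNetworkAdapter -VMName "Server2" -SwitchName "LAN"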

With that said, there are tradeoffs. Traffic which just goes through a virtual switch requires more CPU cycles than traffic that goes on the wire. Roughly speaking, this is because you can use hardware accelerators on the NIC when you put traffic on the wire.

Given today's powerful CPUs, and if your physical Ethernet adapters are 1Gb, you'll see much greater throughput between two VMs on a single physical host. But you'll also see greater overall CPU usage. You decide which is more important to you.
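If you want to put numbers on that trade-off, something as simple as iperf run inside the two guests will show it - a rough sketch, assuming iperf is installed in both guests (the address is just a placeholder for whatever the other VM answers on):

    # On the first VM (server side)
    iperf -s
    # On the second VM (client side) - 30-second test with 4 parallel streams
    iperf -c 192.168.100.1 -t 30 -P 4

Watch Task Manager or perfmon on the host while it runs and you'll see the CPU cost of the extra throughput at the same time.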

Jake Oshins

It is possible to configure internal networks between the guest machines on all the major hypervisors, and it is more than likely that you will find a significant performance benefit.

As for the complexity, networking is an inherent part of any virtual infrastructure, and I don't see how a virtual interface that is not bridged to a physical interface is any more complex than one that is. Perhaps less so.

Roy
  • I was thinking that a VM configured to use internal networking would be slightly less portable. If it was only using bridged networking, then it would see the other servers just as easily even if the VM was moved to a new host. – Nic Oct 22 '09 at 08:19
  • That's true enough. If you move the machine outside its current cluster, you would have to reconfigure networking. But then, depending on your infrastructure, this is often the case even when bridges to physical interfaces are used - you may need to bridge virtual interfaces to the proper physical or VLAN interfaces, which may include reconfiguration of physical switches and VLAN configuration on the hypervisor. – Roy Oct 22 '09 at 09:04

If you can assign a separate NIC using RDM/PCI-PT to each VM, you will get native performance.
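For what it's worth, on a KVM host (mentioned in the comments below) that kind of NIC passthrough is a <hostdev> entry in the guest's libvirt domain XML, added with virsh edit - a minimal sketch, where the PCI address is just an example and has to match the real device:

    <!-- Hand the physical NIC at PCI address 06:00.0 straight to the guest -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      </source>
    </hostdev>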

dyasny
  • In almost every scenario that would be slower than using a 'software' switch. – Chopper3 Oct 22 '09 at 10:52
  • How is that possible? When you use a real NIC, you eliminate several intermediate levels. I have managed to get proper 1Gb performance on a KVM machine like this, while with software or PV NIC drivers I was close but not quite there. – dyasny Oct 22 '09 at 11:04
  • Because when you use a physical NIC the very best speed you can achieve is the line speed of that card (i.e. 1Gbps, 10Gbps etc). When you use a software switch to pass packets from one VM to another on the same host, the packets are handled by the CPU as fast as it can do it, which is almost always MUCH faster than 1Gbps or even 10Gbps - i.e. the software switch isn't limited to the speed of the physical link out of the box and onto the real LAN. The question originator was asking what was the fastest way, and left open whether these two VMs were on the same server. – Chopper3 Oct 22 '09 at 13:35
  • ah, you're right there, my fault for not reading the question well enough. – dyasny Oct 22 '09 at 13:44
  • no probs - not long finished my VCP4 so it's all rather TOO close to the front of my mind right now :) – Chopper3 Oct 22 '09 at 14:36
  • I've stayed away from vmware since 3.5u2 - switched to opensource, and hopefully will never look back – dyasny Oct 22 '09 at 15:55