
I have a machine that runs several VMs (5) and has 3 physical network cards (each with 2 ports), for a total of six 1 Gbps Ethernet ports.

I have an SFP-capable switch with a total of 48 Gbps of switching bandwidth and a 10 Gbps SFP link. The server also has one SFP port (10 Gbps).

I'm curious what the best setup would be, performance-wise (getting the most out of every bit, with the least CPU usage), and why.

Would it be better to have all the VMs connected to the one SFP port and then to the SFP port on the switch, or should I get 5 Ethernet cables and connect them to 5 ports on the network switch?

If it's still a bit unclear, imagine this scenario:

Two PCs on the switch each want to download a large file, the first from VM A and the second from VM B. If the VMs are connected with Ethernet, each one will have its own connection, so the connection from VM A will be switched to PC A, and simultaneously the connection from VM B will be switched to PC B. Is that right? And if you connected both VMs to the SFP port, then that single SFP port would be carrying the traffic for both PC A and PC B.

So which scenario would perform the best at maximum load? Why?
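
A back-of-envelope way to compare the two wirings in that scenario (a rough sketch only; it assumes each client PC has a 1 Gbps NIC, which the question does not state):

    # Back-of-envelope comparison of the two wirings in the scenario above.
    # Assumption (not from the question): each client PC has a 1 Gbps NIC,
    # and a flow can never go faster than its slowest hop.

    GBPS = 1.0  # one gigabit per second

    def flow_rate(*hops_gbps):
        """A flow is limited by the slowest hop on its path."""
        return min(hops_gbps)

    # Wiring 1: each VM pinned to its own 1 Gbps port.
    vm_a_to_pc_a = flow_rate(1 * GBPS, 1 * GBPS)   # VM's port -> PC's NIC
    vm_b_to_pc_b = flow_rate(1 * GBPS, 1 * GBPS)
    print("separate 1G ports:", vm_a_to_pc_a + vm_b_to_pc_b, "Gbit/s aggregate")  # 2.0

    # Wiring 2: both VMs share the single 10 Gbps SFP uplink.
    vm_a_to_pc_a = flow_rate(10 * GBPS, 1 * GBPS)  # still capped by the PC's NIC
    vm_b_to_pc_b = flow_rate(10 * GBPS, 1 * GBPS)
    print("shared 10G port:  ", vm_a_to_pc_a + vm_b_to_pc_b, "Gbit/s aggregate")  # 2.0

With only two 1 Gbps clients, both wirings saturate the clients equally; the 10 Gbps uplink only becomes the clear winner once a single VM, or the sum of all VMs, needs more than 1 Gbps at the same time.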

Edit: I wanted to keep this fairly generic so it could be applied to a more general scenario, but details of the setup have been asked for, so here they are:

Server: PowerEdge T620
SFP card: PEX10000SFP 10 gigabit
NICs: 3x NetXtreme BCM5720
OS: XenServer 6.2
CPU: Xeon E5-2609
Switch: T1600G-28TS
Guest OSes: Debian Wheezy (PV)

  • Some details, like the virtualization software you're using, operating system types, the server make/model, the switch make/model... etc. would be helpful. – ewwhite Jan 11 '16 at 19:50
  • alright, added! – Gizmo Jan 11 '16 at 19:56
  • 2
    I gave an answer but my preference would always be to have at least one fail-over connection, on a different NIC card, on a different PCI port, on a different daughter card whenever possible. The performance gains though possible are never guaranteed and the risk is quite real. – Nick Young Jan 11 '16 at 20:18
  • If your switch supports link aggregation, use it to create a large fat pipe from all adapters. You'd have a bondX interface, which you can configure as a port on a Linux or openvswitch bridge. You can then create virtual ports for VMs. Note that you might need to test different LACP modes to make sure you use multiple adapters. This might be helpful: http://blog.scottlowe.org/2012/10/19/link-aggregation-and-lacp-with-open-vswitch/ – Alec Istomin Feb 10 '16 at 04:26
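
To illustrate the "test different LACP modes" point, a simplified sketch of how a bonding/LACP transmit hash picks a member link (the two hash functions are rough stand-ins for the Linux bonding driver's layer2 and layer3+4 policies, and every MAC/IP/port value below is made up):

    # Simplified stand-ins for the Linux bonding driver's transmit hash
    # policies; real kernels use more bits, but the idea is the same.
    N_LINKS = 6  # six 1 Gbps ports in one bond

    def layer2_hash(src_mac, dst_mac):
        # xmit_hash_policy=layer2: MAC addresses only
        return (src_mac[-1] ^ dst_mac[-1]) % N_LINKS

    def layer34_hash(src_ip, dst_ip, src_port, dst_port):
        # xmit_hash_policy=layer3+4: IP addresses and TCP/UDP ports
        return (src_ip[-1] ^ dst_ip[-1] ^ src_port ^ dst_port) % N_LINKS

    server_mac, pc_mac = b"\x00\x11\x22\x33\x44\x55", b"\x66\x77\x88\x99\xaa\xbb"
    server_ip, pc_ip = (192, 168, 1, 10), (192, 168, 1, 20)

    # Two different TCP downloads between the same server and the same PC:
    for sport in (49152, 49153):
        print("layer2:", layer2_hash(server_mac, pc_mac),              # same link both times
              "layer3+4:", layer34_hash(server_ip, pc_ip, sport, 80))  # can differ per flow

With a MAC-only (layer2) hash, every flow between the same two machines lands on the same 1 Gbps member, while a layer3+4 hash can spread separate TCP connections across members; that is why the mode needs testing.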

1 Answer


1 x 10Gb link for performance.

Otherwise, if a single server needs to push 1.1 Gbps to another server, it can't, because most load-balancing schemes hash on the destination MAC or IP (which would be the same for all of that traffic), so the whole transfer stays on a single 1 Gbps link.

This also eliminates the issue of some links being busier than others, for the same reason: if the hash maps several flows to the same link, they all end up on that link, except in special dynamic switch configurations in VMware.
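
A tiny worked example of that ceiling (the 1.1 Gbps figure is from the answer; everything else is illustrative):

    # One server-to-server stream that wants 1.1 Gbit/s.
    need = 1.1  # Gbit/s

    # Six bonded 1 Gbps links: a MAC/IP-based hash keeps the whole stream
    # on a single member link, so the per-flow ceiling is 1 Gbit/s.
    print("6 x 1G bond :", min(need, 1.0), "Gbit/s delivered")   # 1.0

    # One 10 Gbps link: no per-flow ceiling below 10 Gbit/s.
    print("1 x 10G SFP :", min(need, 10.0), "Gbit/s delivered")  # 1.1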

Nick Young
  • +1 True. Receiving >1 Gbps over a single TCP connection using multiple 1 Gbps interfaces is still very hard to achieve in practice, but trivial for a 10 Gbps interface. Think central backup solution :) – kubanczyk Jan 12 '16 at 00:48