
I'm currently in a discussion about how LACP and load balancing work.

Assume there are two servers, each using 4 NICs (1 Gbit/s), and both are connected to the same switch with aggregated links.

Which "statements" are true?

  • Each server could theoretically deliver 4 Gbit/s of data if there are enough clients requesting data (at least one per link).
  • The servers can talk to each other at 4 Gbit/s.
  • The servers can talk to each other at only 1 Gbit/s, because the algorithms that balance traffic will always choose the same NIC out of the 4 available links (see the sketch after this list).
  • When using round-robin, the servers can communicate at a speed > 1 Gbit/s, but will encounter out-of-order packets, which need to be re-sorted and therefore end up with a transfer speed much lower than 4 Gbit/s.
  • A connection between hostA and hostB will never be faster than the speed of a single link, even if you group 8 or more NICs.
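To illustrate the third statement, here is a minimal Python sketch of how a layer-2-style transmit hash could pick a member link from the source and destination MAC addresses alone. The MAC values and the exact formula are made up for illustration and do not match any particular switch or OS; the point is only that a fixed host pair always hashes to the same link.

```python
# Rough sketch of a layer2-style transmit hash (not the exact formula
# any particular switch or OS uses): the egress link is chosen from the
# source and destination MAC addresses only, so every frame between the
# same two hosts lands on the same member link.

def pick_link(src_mac: str, dst_mac: str, num_links: int = 4) -> int:
    """XOR the last byte of each MAC and fold it onto one of num_links."""
    src_last = int(src_mac.split(":")[-1], 16)
    dst_last = int(dst_mac.split(":")[-1], 16)
    return (src_last ^ dst_last) % num_links

# Hypothetical MAC addresses for hostA and hostB:
host_a = "02:00:00:00:00:0a"
host_b = "02:00:00:00:00:1f"

# Every frame of the hostA<->hostB conversation hashes to the same link,
# no matter how many frames are in flight:
print(pick_link(host_a, host_b))   # 1 with these example values
print(pick_link(host_b, host_a))   # XOR is symmetric, same index again
```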

cheers, dognose


1 Answer


Two basic things to keep in mind:

  • Traffic is distributed packet by packet
  • All packets associated with a given “conversation” are transmitted on the same link to prevent mis-ordering

That second point varies between operating systems and implementations as to what counts as a "conversation", so the answers to some of your questions are not always the same. Generally, if there are a lot of "conversations", the aggregated link will perform well, but single-stream benchmarking between two systems will be limited to what a single link can provide.

Ref: IEEE 802.3ad Link Aggregation (LAG): what it is, and what it is not (PDF)
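As a rough illustration of that conversation-based distribution (a sketch only, assuming a layer-3/4-style hash over IP addresses and TCP ports; the addresses, ports, and hash function here are made up and Python's built-in hash() merely stands in for the real per-packet hash):

```python
# Minimal sketch of an assumed layer3+4-style policy: hashing on
# (src IP, dst IP, src port, dst port) spreads many client conversations
# across the member links, but any single conversation still maps to
# exactly one link.

import random
from collections import Counter

NUM_LINKS = 4

def pick_link(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> int:
    # Python's hash() stands in for the real hash function on the NIC/switch.
    return hash((src_ip, dst_ip, src_port, dst_port)) % NUM_LINKS

# Many clients, each with its own conversation: traffic spreads out.
clients = [f"10.0.0.{i}" for i in range(2, 50)]
flows = [(c, "10.0.0.1", random.randint(1024, 65535), 80) for c in clients]
print(Counter(pick_link(*f) for f in flows))   # roughly spread over 4 links

# One benchmark stream between two servers: every packet takes one link.
single_flow = ("10.0.0.1", "10.0.0.2", 55555, 5001)
print({pick_link(*single_flow) for _ in range(1000)})  # a single link index
```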

Brian
    Thx for the answer and link. So it comes down to what a conversation is seen as. I especially like the conclusion: "LAG is good, but it’s not as good as a fatter pipe". – dognose Feb 07 '16 at 12:36