
I have a file server with multiple 1Gbps NICs that is used by several different kinds of clients.

My switch is configured to hash on source and destination IP and port.

By manual load sharing I mean that the NICs on the file server have different IPs and the shares are mounted via those different IPs.

From reading this ESXi link, it seems better to do manual load sharing.

If bandwidth is the only consideration (hence this question), which approach would provide more throughput?

For example, I could have an NFS share on IP 1 dedicated to the ESXi server, and workstations using Samba on IP 2.

Yuan

2 Answers


Bandwidth should not be the only consideration - also consider redundancy.

In a single switch environment, LACP will offer redundancy should one of your aggregated NICs fail. Should this happen while doing manual load sharing, whichever service was bound to the failed NIC will also fail. This redundancy increases if you have multiple stacked switches and connect each LACP link to a different switch.

As for which method will provide more throughput: if I understand your scenario correctly, link aggregation only really increases throughput when multiple clients try to maximize their bandwidth to the server at the same time, whereas manual load sharing only really increases throughput when different clients are using each service at the same time.

So which of these two scenarios do you think will be more common in your environment? And there's your answer.

(E.g. with LACP you could have 2 servers accessing an NFS share at 1Gbps each. Whereas with manual load sharing, you'll still only get 0.5Gbps per server, because they both must use the same physical link. But if you want to ensure that your NFS and SMB shares are both guaranteed at least 1Gbps each, then manual load sharing might be the way to go).
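As a back-of-the-envelope sketch of that reasoning (the link speed and flow placements below are just the illustrative numbers from this answer, not measurements):

```python
# Toy model: each physical 1 Gbps link is shared equally by the TCP
# sessions that land on it. The placements mirror the two scenarios
# described above and are purely illustrative.
LINK_GBPS = 1.0

def per_flow_gbps(flows_per_link):
    """Return the per-session throughput on each link."""
    return {link: LINK_GBPS / n for link, n in flows_per_link.items() if n}

# LACP, two clients hitting one NFS IP: the src/dst hash can place the
# two sessions on different member links, so each can reach ~1 Gbps.
print(per_flow_gbps({"nic1": 1, "nic2": 1}))   # {'nic1': 1.0, 'nic2': 1.0}

# Manual load sharing, same two clients mounting via the single NFS IP:
# both sessions ride the same physical link, so ~0.5 Gbps each.
print(per_flow_gbps({"nic1": 2}))              # {'nic1': 0.5}
```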

blacklight
  • Even with 1 server and 1 client, each with 2 NICs, if the 2 NICs are configured with different IPs and I was accessing 2 shares on the server, then I should get 2Gbps throughput. If I was using LACP, then I would get 1Gbps, wouldn't I? – Yuan Jan 12 '14 at 21:23
  • Yes, that is correct. I think the benefit of LACP is that if you have 2 clients, each trying to access one IP (eg an NFS share), then they can each get full bandwidth under LACP, but not via manual load sharing. So ultimately, if you really want a single client to get 2gb/s throughput, then manual will be the way to go. – blacklight Jan 12 '14 at 22:44

With your switch configured to balance based on layers 3+4 (IP and port), you'd likely see a separate interface used for each data stream. Say you've got several connections from a single endpoint using a range of TCP ports: the switch will almost always choose a separate interface for each TCP session. I've only seen sessions pile onto the same link once in several hundred different environments.

In this case, LACP is ideal because you won't have to manage all of the separate IPs to be used by the clients. The redundancy is an added bonus.
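For intuition, here's a minimal sketch of what a layer-3/4 hash boils down to (this is not any particular switch's real algorithm, and the addresses and ports are made up): the 4-tuple is reduced to a number and taken modulo the number of member links, so each TCP session sticks to one link while different sessions tend to spread out.

```python
import ipaddress

def pick_member_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Hash the layer-3/4 tuple and map it onto one LAG member link."""
    key = (int(ipaddress.ip_address(src_ip))
           ^ int(ipaddress.ip_address(dst_ip))
           ^ src_port ^ dst_port)
    return key % n_links

# One client talking to one server IP over several TCP sessions
# (different source ports) on a 2-link aggregate:
for sport in (49152, 49153, 49154, 49155):
    print(sport, "->", pick_member_link("10.0.0.20", "10.0.0.10", sport, 445, 2))
```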

Ryan Davies