I have two network cards bonded in a balance-rr configuration:
root@server:~# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 4a:76:c7:cc:8a:73 brd ff:ff:ff:ff:ff:ff
3: enp0s31f6: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 4a:76:c7:cc:8a:73 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 4a:76:c7:cc:8a:73 brd ff:ff:ff:ff:ff:ff
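Both NICs show up as slaves of bond0 and report LOWER_UP. For reference, the driver-level view of the bond (mode, MII status, per-slave details) and the negotiated link speeds can be checked with the standard commands below; output is omitted here:

# Bonding driver's own view of bond0 and its slaves
cat /proc/net/bonding/bond0

# Negotiated speed/duplex on each slave
ethtool enp0s31f6 | grep -E 'Speed|Duplex'
ethtool enp1s0 | grep -E 'Speed|Duplex'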
The bond works great and is configured via netplan as follows:
network:
  ethernets:
    enp0s31f6:
      dhcp4: false
    enp1s0:
      dhcp4: false
  version: 2
  bonds:
    bond0:
      interfaces: [enp0s31f6,enp1s0]
      addresses: [10.0.10.10/16]
      gateway4: 10.0.0.1
      mtu: 9000
      nameservers:
        addresses: [10.0.0.1]
      parameters:
        mode: balance-rr
        mii-monitor-interval: 100
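For completeness, applying and checking a config like this looks roughly as follows (standard netplan commands; netplan try rolls back automatically if connectivity is lost before you confirm):

# Test the config with automatic rollback, or apply it directly
sudo netplan try
sudo netplan apply

# Confirm the bond came up with the expected mode and MTU
ip -d link show bond0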
However, I'm noticing something peculiar. When transferring large files over NFS from a single server (10G connection), I achieve at most 180 MB/s, with ~120 MB/s coming through enp0s31f6 and ~60 MB/s coming through enp1s0. If I unplug enp0s31f6, the remaining interface, enp1s0, reaches its maximum throughput of 120 MB/s.
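For reference, the per-slave split can be watched directly with the kernel's per-interface byte counters; a simple loop like the one below (interface names taken from the ip link output above, rx_bytes for the receive direction, swap in tx_bytes for sends) prints a per-second rate for each slave:

# Print per-second receive rate for each bond slave
while true; do
    a1=$(cat /sys/class/net/enp0s31f6/statistics/rx_bytes)
    b1=$(cat /sys/class/net/enp1s0/statistics/rx_bytes)
    sleep 1
    a2=$(cat /sys/class/net/enp0s31f6/statistics/rx_bytes)
    b2=$(cat /sys/class/net/enp1s0/statistics/rx_bytes)
    echo "enp0s31f6: $(( (a2 - a1) / 1048576 )) MB/s    enp1s0: $(( (b2 - b1) / 1048576 )) MB/s"
done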
Any idea why the load appears to be distributed in a 2:1 ratio?