I am trying to team 3 network cards together on 2 servers to reach a combined throughput of 3Gbps for replicating data between them. The setup is simple: both servers have 3 Gigabit network cards connected to the same Cisco switch, on ports 1-3 for server-1 and ports 4-6 for server-2. My interfaces configuration looks like this:
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet manual
    bond-master bond0

auto eth1
iface eth1 inet manual
    bond-master bond0

auto eth2
iface eth2 inet manual
    bond-master bond0

auto bond0
iface bond0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-miimon 100
    bond-mode 802.3ad
    #bond-downdelay 200
    #bond-updelay 200
    bond-lacp-rate 1
    # tried bond with slaves and no slaves interfaces
    bond-slaves eth0 eth1 eth2
    # bond-slaves none
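For completeness, this is roughly how I apply the configuration after each change (commands are approximate and run on both servers):

# reload the bond after editing /etc/network/interfaces (or just reboot)
sudo ifdown bond0 ; sudo ifup bond0
ip addr show bond0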
I have tried multiple configurations on these cards, but I always end up with only one network card being used at a time.
I tested the performance with iperf and netcat:
# server-1
iperf -s
# server-2
iperf -c 192.168.1.10
# Wait for traffic
nc.traditional -l -p 5000 | pv > /dev/null
# Push traffic
dd if=/dev/zero | pv | nc.traditional 192.168.1.11 5000
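While these tests run, I watch the per-interface counters to see which card actually carries the traffic, roughly like this (any similar counter tool would do):

# watch traffic per interface; only one ethX ever moves while bond0 shows the total
watch -n1 "grep -E 'bond0|eth[0-2]' /proc/net/dev"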
We also tried many configurations on the Cisco switch, both with and without a port-channel, and in every case only one network card is used at a time. If we test each card individually, it works at 1Gbps.
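For reference, the port-channel attempt for server-1's three ports looked roughly like this (interface numbers are my assumption from the port 1-2-3 wiring; server-2 got an equivalent Port-channel on ports 4-6):

! LACP port-channel for server-1, ports 1-3
interface range GigabitEthernet0/1 - 3
 channel-group 1 mode active
interface Port-channel1
 switchport mode access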
I can also say that in /proc/net/bonding/bond0 the mode shows 802.3ad and the LACP rate shows fast. There are no link failure counts and all 3 interfaces show up. I also verified each eth interface with ethtool and they look fine to me.
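Concretely, these are the checks I mean, run on both servers:

# bonding status: mode 802.3ad, LACP rate fast, 3 slaves, no link failures
cat /proc/net/bonding/bond0
# per-NIC check, repeated for eth1 and eth2: link detected, 1000Mb/s full duplex
ethtool eth0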
I was following this guide to set it up: https://help.ubuntu.com/community/UbuntuBonding. I enabled the bonding module in the kernel with modprobe bonding, and lsmod confirms that the bonding module is loaded.
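In other words, roughly:

sudo modprobe bonding
lsmod | grep bonding    # bonding is listed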
What are we missing to get this working?