I'm trying to get NIC bonding working with balance-rr so that three 1 Gbps NIC ports are combined and we get 3 Gbps instead of 1 Gbps. We are doing this on two servers connected to the same switch, but we are only getting the speed of one physical link.

The switch is a Dell PowerConnect 5324 (SW version 2.0.1.3, Boot version 1.0.2.02, HW version 00.00.02). Both servers run CentOS 5.9 (Final) with OnApp Hypervisor (CloudBoot).

Server 1 is using ports g5-g7 in port-channel 1. Server 2 is using ports g9-g11 in port-channel 2.
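
For reference, the port-channels were created as static LAGs on the switch, roughly like the following (paraphrased from memory; the exact channel-group syntax should be checked against the 5324 CLI guide):

console# configure
console(config)# interface range ethernet g(5-7)
console(config-if)# channel-group 1 mode on
console(config-if)# exit
console(config)# interface range ethernet g(9-11)
console(config-if)# channel-group 2 mode on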

Switch:

show interface status

Port     Type         Duplex  Speed Neg      ctrl State       Pressure Mode
-------- ------------ ------  ----- -------- ---- ----------- -------- -------
g1       1G-Copper      --      --     --     --  Down           --     --
g2       1G-Copper    Full    1000  Enabled  Off  Up          Disabled Off
g3       1G-Copper      --      --     --     --  Down           --     --
g4       1G-Copper      --      --     --     --  Down           --     --
g5       1G-Copper    Full    1000  Enabled  Off  Up          Disabled Off
g6       1G-Copper    Full    1000  Enabled  Off  Up          Disabled Off
g7       1G-Copper    Full    1000  Enabled  Off  Up          Disabled On
g8       1G-Copper    Full    1000  Enabled  Off  Up          Disabled Off
g9       1G-Copper    Full    1000  Enabled  Off  Up          Disabled On
g10      1G-Copper    Full    1000  Enabled  Off  Up          Disabled On
g11      1G-Copper    Full    1000  Enabled  Off  Up          Disabled Off
g12      1G-Copper    Full    1000  Enabled  Off  Up          Disabled On
g13      1G-Copper      --      --     --     --  Down           --     --
g14      1G-Copper      --      --     --     --  Down           --     --
g15      1G-Copper      --      --     --     --  Down           --     --
g16      1G-Copper      --      --     --     --  Down           --     --
g17      1G-Copper      --      --     --     --  Down           --     --
g18      1G-Copper      --      --     --     --  Down           --     --
g19      1G-Copper      --      --     --     --  Down           --     --
g20      1G-Copper      --      --     --     --  Down           --     --
g21      1G-Combo-C     --      --     --     --  Down           --     --
g22      1G-Combo-C     --      --     --     --  Down           --     --
g23      1G-Combo-C     --      --     --     --  Down           --     --
g24      1G-Combo-C   Full    100   Enabled  Off  Up          Disabled On

                                          Flow    Link
Ch       Type    Duplex  Speed  Neg      control  State
-------- ------- ------  -----  -------- -------  -----------
ch1      1G      Full    1000   Enabled  Off      Up
ch2      1G      Full    1000   Enabled  Off      Up
ch3         --     --      --      --       --    Not Present
ch4         --     --      --      --       --    Not Present
ch5         --     --      --      --       --    Not Present
ch6         --     --      --      --       --    Not Present
ch7         --     --      --      --       --    Not Present
ch8         --     --      --      --       --    Not Present

Server 1:

cat /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
HWADDR=00:1b:21:ac:d5:55
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth4

DEVICE=eth4
HWADDR=68:05:ca:18:28:ae
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth5

DEVICE=eth5
HWADDR=68:05:ca:18:28:af
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond

DEVICE=onappstorebond
IPADDR=10.200.52.1
NETMASK=255.255.0.0
GATEWAY=10.200.2.254
NETWORK=10.200.0.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
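
The balance-rr mode itself isn't set in these ifcfg files; it comes from the bonding module options (presumably set up by the CloudBoot templates). On a stock CentOS 5 install the equivalent would be something like:

# Either in ifcfg-onappstorebond (RHEL/CentOS 5 initscripts):
BONDING_OPTS="mode=balance-rr miimon=100"

# Or in /etc/modprobe.conf:
alias onappstorebond bonding
options onappstorebond mode=balance-rr miimon=100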

cat /proc/net/bonding/onappstorebond

Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:ac:d5:55

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:28:ae

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:28:af

Server 2:

cat /etc/sysconfig/network-scripts/ifcfg-eth3

DEVICE=eth3
HWADDR=00:1b:21:ac:d5:a7
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth4

DEVICE=eth4
HWADDR=68:05:ca:18:30:30
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-eth5

DEVICE=eth5
HWADDR=68:05:ca:18:30:31
USERCTL=no
BOOTPROTO=none
ONBOOT=yes
MASTER=onappstorebond
SLAVE=yes

cat /etc/sysconfig/network-scripts/ifcfg-onappstorebond

DEVICE=onappstorebond
IPADDR=10.200.53.1
NETMASK=255.255.0.0
GATEWAY=10.200.3.254
NETWORK=10.200.0.0
USERCTL=no
BOOTPROTO=none
ONBOOT=yes

cat /proc/net/bonding/onappstorebond

Ethernet Channel Bonding Driver: v3.4.0-1 (October 7, 2008)

Bonding Mode: load balancing (round-robin)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:1b:21:ac:d5:a7

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:30:30

Slave Interface: eth5
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 68:05:ca:18:30:31

Here are the results of iperf.

------------------------------------------------------------
Client connecting to 10.200.52.1, TCP port 5001
TCP window size: 27.7 KByte (default)
------------------------------------------------------------
[  3] local 10.200.3.254 port 53766 connected with 10.200.52.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   950 MBytes   794 Mbits/sec
  • Do you know if the PowerConnect supports balance-rr teaming? Many switches today do not support this anymore and lean more towards the LACP 802.3ad standard for bonding NICs. – Rex Nov 06 '13 at 16:09

1 Answer

The load balancing of incoming traffic (from the switch to the server) is controlled by the switch, not by the bonding driver.

You probably have close to 3 Gbps of (unordered) transmit capacity out of the server, but only 1 Gbps of receive, because the switch's static LAG hashes each flow onto a single port-channel member and sends everything for that flow down one slave.

You aren't even getting the full 1 Gbps because balance-rr delivers segments out of order, so TCP is working overtime to reassemble your iperf stream.
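
You can usually confirm this on the receiving host; something like the following (the exact counter names vary by kernel) should show the reordering and retransmission counters climbing during an iperf run:

# Snapshot TCP reordering/retransmit counters on the receiver
netstat -s | egrep -i 'reorder|retrans'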

In my experience it's practically impossible to load balance a single TCP stream reliably.

Properly configured bonding can give you an aggregate throughput approaching the sum of the slaves' bandwidth across many concurrent flows, but the maximum throughput of any single flow is still the speed of one slave.
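
One way to see how much of this is per-flow behaviour is to compare your single-stream result with several parallel streams, for example (assuming the iperf server is still listening on 10.200.52.1):

# Three parallel TCP streams for 30 seconds
iperf -c 10.200.52.1 -P 3 -t 30

Bear in mind that with a static LAG the switch may still hash all streams between the same pair of hosts onto one member, so the receive side may not scale even then.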

Personally I would use mode 2, balance-xor (with a static EtherChannel/LAG on the switch), or mode 4, 802.3ad (with LACP on the switch).
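
A minimal sketch of the mode 4 variant, assuming the same bond name; the switch-side syntax is from memory and should be verified against the PowerConnect 5324 CLI reference:

# Linux side, e.g. BONDING_OPTS in ifcfg-onappstorebond:
BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=1"

# Switch side (assumed syntax, verify on the 5324):
# interface range ethernet g(5-7)
# channel-group 1 mode auto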

If you need more than 1 Gbps for a single stream, you need a faster NIC.

suprjami