
I need to connect six NICs to a single bond on RHEL 5.4; two of the NICs are Broadcom and the other four are Intel. Two questions, please:
1. Is this configuration possible, and what are the prerequisites/configuration on the switch and the NICs?
2. After configuring the bond, /proc/net/bonding/bond0 (below) lists all six eth devices as slaves. However, only two of them carry the active Aggregator ID, so the active aggregator shows only two ports. What does this mean? Is it normal?

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.4.0 (October 7, 2008)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 150
Up Delay (ms): 0
Down Delay (ms): 0

802.3ad info
LACP rate: slow
Active Aggregator Info:
      Aggregator ID: 13
      Number of ports: 2
      Actor Key: 9
      Partner Key: 17
      Partner Mac Address: 00:01:81:28:84:00

Slave Interface: eth0
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:26:b9:49:ed:45
Aggregator ID: 13

Slave Interface: eth1
MII Status: up
Link Failure Count: 1
Permanent HW addr: 00:26:b9:49:ed:47
Aggregator ID: 13

Slave Interface: eth4
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:4a:79:58
Aggregator ID: 15

Slave Interface: eth5
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:4a:79:59
Aggregator ID: 14

Slave Interface: eth8
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:4a:77:b0
Aggregator ID: 17

Slave Interface: eth9
MII Status: up
Link Failure Count: 0
Permanent HW addr: 00:1b:21:4a:77:b1
Aggregator ID: 18

Thank you, mku.

Update (after the comments):

# ifconfig
bond1     Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          inet6 addr: fe80::226:b9ff:fe49:ed45/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:519 errors:0 dropped:0 overruns:0 frame:0
          TX packets:743 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:52812 (51.5 KiB)  TX bytes:91867 (89.7 KiB)

eth0      Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:264 errors:0 dropped:0 overruns:0 frame:0
          TX packets:148 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:26203 (25.5 KiB)  TX bytes:17895 (17.4 KiB)
          Interrupt:226 Memory:d4000000-d4012800

eth1      Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:187 errors:0 dropped:0 overruns:0 frame:0
          TX packets:117 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:18177 (17.7 KiB)  TX bytes:14976 (14.6 KiB)
          Interrupt:234 Memory:d6000000-d6012800

eth4      Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:120 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2728 (2.6 KiB)  TX bytes:14800 (14.4 KiB)
          Memory:ddbc0000-ddbe0000

eth5      Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:118 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1488 (1.4 KiB)  TX bytes:14632 (14.2 KiB)
          Memory:ddbe0000-ddc00000

eth8      Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:22 errors:0 dropped:0 overruns:0 frame:0
          TX packets:119 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2728 (2.6 KiB)  TX bytes:14756 (14.4 KiB)
          Memory:de7c0000-de7e0000

eth9      Link encap:Ethernet  HWaddr 00:26:B9:49:ED:45
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:121 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1488 (1.4 KiB)  TX bytes:14808 (14.4 KiB)
          Memory:de7e0000-de800000
ComBin
    What would be helpful to answer this question is the ifconfig output for all the interfaces as well as the switch model and configuration. – Charles Hooper Jan 13 '10 at 17:29
  • Please see the output for the relevant interfaces added to the question. Is this enough? The switch is not on my premises; I need to check it. –  Jan 14 '10 at 11:47

3 Answers


You are most likely hitting an upstream regression that crept into RHEL 5.4.

You may want to check this out for more information:

https://bugzilla.redhat.com/show_bug.cgi?id=567604


Could you post the config file for the bond interface and the configuration of the switch ports the NICs connect to? You have to make sure that all NICs end up with the same Aggregator ID.

Also configure an aggregation group (trunk) on the switch for all the bonded ports, and set its LACP mode to "active".
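For reference, here is a rough sketch of what the Linux side of an 802.3ad bond typically looks like on RHEL 5. The IP address and the interface name eth0 are illustrative; repeat the slave file once per NIC in the bond:

```
# /etc/modprobe.conf -- load the bonding driver in 802.3ad (LACP) mode
alias bond0 bonding
options bond0 mode=802.3ad miimon=100 lacp_rate=slow

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.1.10      # illustrative address
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (one such file per slave NIC)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

On the switch side the corresponding requirement is a single LACP channel group containing all six ports (e.g. `channel-group 1 mode active` on each port, in Cisco IOS terms), with every port at the same speed and duplex; ports that cannot negotiate into that group will end up in separate aggregators, which matches the symptom in the question.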

PEra
  • 1. Please see ifconfig output for the relevant interfaces added to the question. Is this enough? 2. Where can I configure Aggregator ID for the NICs? –  Jan 14 '10 at 11:42

http://studyhat.blogspot.com/2009/10/linux-nic-bonding.html

The link above walks through Linux NIC bonding configuration; check which bonding mode you are using on your end.
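To see at a glance whether the slaves agree, you can pull the bonding mode and the per-slave aggregator IDs out of the /proc file. On a live box you would read /proc/net/bonding/bond0 directly; the sketch below uses a saved, trimmed copy of the output from the question so the commands are reproducible:

```shell
# Trimmed copy of the /proc output (IDs mirror the ones in the question).
cat > bond0.txt <<'EOF'
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Slave Interface: eth0
Aggregator ID: 13
Slave Interface: eth1
Aggregator ID: 13
Slave Interface: eth4
Aggregator ID: 15
Slave Interface: eth5
Aggregator ID: 14
EOF

# Which mode is the bond running in?
grep 'Bonding Mode' bond0.txt

# In a healthy 802.3ad bond every slave joins the same aggregator, so the
# count of distinct IDs should be 1; here it prints 3, showing the slaves
# failed to aggregate into a single LAG.
grep '^Aggregator ID' bond0.txt | sort -u | wc -l
```

If the count is greater than 1, the usual suspects are the switch ports not all being in the same LACP channel group, or speed/duplex mismatches between the NICs.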

Rajat
  • Thank you. I saw this before in the RHEL documentation. According to the ethtool output, the NICs have the same speed and duplex. So the remaining question is #2: what is the Aggregator ID, and is the output of /proc/net/bonding/bond0 correct? If not, what could be the reason? –  Jan 14 '10 at 15:08
  • Red Hat labs suggest using mode 4, which I have also tested with 4 NICs, but in my case I don't get such output in /proc/net/bonding/bond0; that setup was tested by Red Hat labs. – Rajat Jan 14 '10 at 20:17