We have a server with two NICs bonded together in IEEE 802.3ad (LACP) mode:
# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.14.21

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer3+4 (1)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 0c:xx:xx:xx:99:40
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 2
    Actor Key: 21
    Partner Key: 13
    Partner Mac Address: 00:xx:5e:00:01:00

Slave Interface: enP1p1s0f0
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:xx:xx:xx:99:40
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 0c:xx:xx:xx:99:40
    port key: 21
    port priority: 255
    port number: 1
    port state: 61
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:xx:5e:00:01:00
    oper key: 13
    port priority: 255
    port number: 14
    port state: 61

Slave Interface: enP1p1s0f1
MII Status: up
Speed: 25000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 0c:xx:xx:xx:99:41
Slave queue ID: 0
Aggregator ID: 1
Actor Churn State: none
Partner Churn State: none
Actor Churned Count: 0
Partner Churned Count: 0
details actor lacp pdu:
    system priority: 65535
    system mac address: 0c:xx:xx:xx:99:40
    port key: 21
    port priority: 255
    port number: 2
    port state: 61
details partner lacp pdu:
    system priority: 65535
    system mac address: 00:xx:5e:00:01:00
    oper key: 13
    port priority: 255
    port number: 32782
    port state: 61
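Since the bond uses the layer3+4 transmit hash, the driver picks a TX slave per flow from the ports and IP addresses. Below is a rough Python model of that hash, following the formula given in the kernel's bonding documentation (Documentation/networking/bonding.rst) for IPv4 and unfragmented TCP/UDP; the real driver goes through its flow dissector, and the addresses and ports below are made up for illustration:

import ipaddress

def layer34_slave(src_ip, dst_ip, src_port, dst_port, n_slaves=2):
    # Approximation of the bonding driver's layer3+4 xmit hash
    # (IPv4, unfragmented TCP/UDP), per the kernel bonding docs.
    h = (src_port << 16) | dst_port    # ports as they appear in the header
    h ^= int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    h ^= h >> 16
    h ^= h >> 8
    return h % n_slaves                # index of the slave used for TX

# Hypothetical flows differing only in source port usually land on
# different slaves:
print(layer34_slave("10.0.0.1", "10.0.0.2", 40001, 443))   # -> 0 with this model
print(layer34_slave("10.0.0.1", "10.0.0.2", 40002, 443))   # -> 1 with this model

Spreading flows like this is consistent with what we see on the TX side.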
Outgoing traffic is balanced across both NICs, but incoming traffic is concentrated on one of them:
bond0: flags=5187<UP,BROADCAST,RUNNING,MASTER,MULTICAST>  mtu 1500
        inet xxx.xxx.165.215  netmask 255.255.255.192  broadcast xxx.xxx.165.255
        inet6 xxxx::xxx:a1ff:fe5a:9940  prefixlen 64  scopeid 0x20<link>
        ether 0c:xx:xx:xx:99:40  txqueuelen 1000  (Ethernet)
        RX packets 286959178  bytes 327689609899 (305.1 GiB)
        RX errors 0  dropped 24884  overruns 0  frame 0
        TX packets 394378427  bytes 477465769012 (444.6 GiB)
        TX errors 0  dropped 10  overruns 0  carrier 0  collisions 0

enP1p1s0f0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 0c:xx:xx:xx:99:40  txqueuelen 1000  (Ethernet)
        RX packets 7184101080  bytes 9239686504798 (8.4 TiB)   <= MOST DATA
        RX errors 0  dropped 42969  overruns 0  frame 0
        TX packets 4769768089  bytes 6383749337305 (5.8 TiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enP1p1s0f1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether 0c:xx:xx:xx:99:40  txqueuelen 1000  (Ethernet)
        RX packets 129986535  bytes 186499719199 (173.6 GiB)
        RX errors 0  dropped 1294  overruns 0  frame 0
        TX packets 4073423613  bytes 5403193492511 (4.9 TiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
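Summing the slave counters from the ifconfig output above makes the mismatch concrete (a quick check; the numbers are copied verbatim from the output):

# Counters copied from the ifconfig output above
slave_rx = 7184101080 + 129986535     # enP1p1s0f0 + enP1p1s0f1 RX packets
slave_tx = 4769768089 + 4073423613    # enP1p1s0f0 + enP1p1s0f1 TX packets
print(slave_rx, "slave RX packets vs", 286959178, "on bond0")   # 7314087615
print(slave_tx, "slave TX packets vs", 394378427, "on bond0")   # 8843191702

The slaves report far more packets than bond0 itself.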
I have two questions:
- Why are RX packets not balanced across the two NICs?
- Why don't the bond0 statistics equal the sum of the enP1p1s0f0 and enP1p1s0f1 statistics?
Additional information: the server is connected to a stacked switch. Could that be related to the imbalance?
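In case it helps, a small script like the following shows the per-slave RX rates live; it is a sketch that only reads the standard /sys/class/net/<iface>/statistics counters, with the interface names from above:

import time

IFACES = ["enP1p1s0f0", "enP1p1s0f1"]   # the two bond slaves

def rx_bytes(iface):
    # Standard kernel per-interface byte counter
    with open(f"/sys/class/net/{iface}/statistics/rx_bytes") as f:
        return int(f.read())

prev = {i: rx_bytes(i) for i in IFACES}
while True:
    time.sleep(1)
    cur = {i: rx_bytes(i) for i in IFACES}
    print("  ".join(f"{i}: {(cur[i] - prev[i]) / 1e6:8.2f} MB/s" for i in IFACES))
    prev = cur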