I currently have two gigabit network interfaces bonded as bond0. Is it possible to also assign IP addresses to the slave interfaces eth0 and eth1 and have traffic routed directly out either one, as when they are not enslaved in a bonding setup?
I am using balance-alb bonding mode, and the eth1 interface shares a MAC address with bond0.
The ifconfig output is as follows:
bond0     Link encap:Ethernet  HWaddr 00:1e:c9:b8:61:3e
          inet addr:x.x.x.x  Bcast:x.x.x.255  Mask:255.255.255.0
          inet6 addr: fe80::21e:c9ff:feb8:613e/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:27055 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1181 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1764025 (1.6 MiB)  TX bytes:96662 (94.3 KiB)

eth0      Link encap:Ethernet  HWaddr 00:1e:c9:b8:61:3c
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:11258 errors:0 dropped:0 overruns:0 frame:0
          TX packets:506 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:723893 (706.9 KiB)  TX bytes:33394 (32.6 KiB)
          Interrupt:16 Memory:f8000000-f8012800

eth1      Link encap:Ethernet  HWaddr 00:1e:c9:b8:61:3e
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:15797 errors:0 dropped:0 overruns:0 frame:0
          TX packets:675 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1040132 (1015.7 KiB)  TX bytes:63268 (61.7 KiB)
          Interrupt:16 Memory:f4000000-f4012800
My /etc/network/interfaces is as follows:
auto bond0
iface bond0 inet static
    address x.x.x.x
    gateway x.x.x.254
    broadcast x.x.x.255
    netmask 255.255.255.0
    up /sbin/ifenslave bond0 eth1 eth0
    down /sbin/ifenslave -d bond0 eth1 eth0
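In other words, what I'd like to be able to do, roughly, is something along these lines while bond0 stays active (the addresses here are placeholders, not my real ones):

```shell
# Give each slave its own address in addition to the bond's address
ip addr add 192.0.2.20/24 dev eth0
ip addr add 192.0.2.21/24 dev eth1

# ...and then route selected traffic directly out one slave,
# bypassing the bonding driver's transmit hashing
ip route add 198.51.100.0/24 via 192.0.2.254 dev eth0
```

I realise the bonding driver may simply not allow addressing enslaved interfaces like this; that's essentially my question.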
Some background on why I would like to do this:
Initially I had a multilink routed setup using iproute2 rules, but for outgoing traffic the bonding seems to perform far better. Unfortunately I'm unable to get any performance improvement for incoming traffic out of any of the bonding modes.
The multilink routed setup seemed to work reasonably well for incoming traffic: certainly not the improvement I've had from bonding on outgoing traffic, but better than 1 Gbit.
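For reference, the earlier multilink setup was along these lines (a sketch from memory with placeholder addresses, not the exact commands I used):

```shell
# One address per NIC
ip addr add 192.0.2.10/24 dev eth0
ip addr add 192.0.2.11/24 dev eth1

# A routing table per link, so replies leave via the interface they arrived on
ip route add 192.0.2.0/24 dev eth0 src 192.0.2.10 table 1
ip route add default via 192.0.2.254 dev eth0 table 1
ip route add 192.0.2.0/24 dev eth1 src 192.0.2.11 table 2
ip route add default via 192.0.2.254 dev eth1 table 2
ip rule add from 192.0.2.10 table 1
ip rule add from 192.0.2.11 table 2

# Per-flow multipath default route to spread outgoing traffic across both links
ip route add default scope global \
    nexthop via 192.0.2.254 dev eth0 weight 1 \
    nexthop via 192.0.2.254 dev eth1 weight 1
```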
So I'm trying to achieve the best of both worlds.
Is this possible?
Also, another quick question: why does the incoming/receive balancing never work with balance-alb? Receive balancing seems to be its only benefit over balance-tlb, yet it never seems to take effect.