
This is my network configuration:

The bond interface (192.168.101.50/24) has two slaves: eth1 (192.168.101.1), which is connected to 192.168.101.2, and eth2 (192.168.101.10), which is connected to 192.168.101.11.

I created a bond interface bond0 in active-backup mode:

bond0     Link encap:Ethernet  HWaddr 00:E0:4C:48:09:36
          inet addr:192.168.101.50  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::2e0:4cff:fe48:936/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:70 errors:0 dropped:0 overruns:0 frame:0
          TX packets:205 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4968 (4.8 KiB)  TX bytes:14126 (13.7 KiB)
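For reference, a bond like this can be created with iproute2; this is a sketch assuming the interface names and address above (the 100 ms MII monitoring interval is an assumption, not from the original setup):

```shell
# Create the bond in active-backup mode (miimon value is an assumption)
modprobe bonding
ip link add bond0 type bond mode active-backup miimon 100

# Slaves must be down before they can be enslaved
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0

# Assign the address and bring everything up
ip addr add 192.168.101.50/24 dev bond0
ip link set bond0 up
ip link set eth1 up
ip link set eth2 up
```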

Then I have two ethernet interfaces

eth1      Link encap:Ethernet  HWaddr 00:E0:4C:48:09:36
          inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.255.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:84 errors:0 dropped:0 overruns:0 frame:0
          TX packets:218 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:5764 (5.6 KiB)  TX bytes:17132 (16.7 KiB)

eth2      Link encap:Ethernet  HWaddr 00:E0:4C:48:09:36
          inet addr:192.168.101.10  Bcast:192.168.101.255  Mask:255.255.255.0
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:41 errors:0 dropped:0 overruns:0 frame:0
          TX packets:88 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:2760 (2.6 KiB)  TX bytes:6412 (6.2 KiB)

Then I defined multiqueue scheduling with two queues: queue 1 for eth1 and queue 2 for eth2. These are the rules:

tc filter add dev bond0 protocol ip parent 1: prio 1 u32 match ip \
        dst 192.168.101.2 action skbedit queue_mapping 1
tc filter add dev bond0 protocol ip parent 1: prio 1 u32 match ip \
        dst 192.168.101.11 action skbedit queue_mapping 2
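For these filters to attach to parent 1:, bond0 needs a root qdisc with that handle (multiq, in the kernel bonding documentation's example), and each slave needs the matching queue_id set through sysfs — which matches the queue_id 1/queue_id 2 visible in the output below. A sketch of those prerequisite steps:

```shell
# Root multiq qdisc: gives bond0 per-queue tx classes under handle 1:,
# so that "action skbedit queue_mapping N" can select a slave queue
tc qdisc add dev bond0 handle 1 root multiq

# Map each slave to a bond transmit queue; queue 0 stays the default
# (these are the standard bonding sysfs paths)
echo "eth1:1" > /sys/class/net/bond0/bonding/queue_id
echo "eth2:2" > /sys/class/net/bond0/bonding/queue_id
```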

This is the current configuration:

9: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:e0:4c:48:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 68 maxmtu 9194
    bond_slave state ACTIVE mii_status UP link_failure_count 0 perm_hwaddr 00:e0:4c:48:08:10 queue_id 1 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 16354 gso_max_segs 65535
    RX: bytes  packets  errors  dropped missed  mcast
    5764       84       0       0       0       0
    RX errors: length   crc     frame   fifo    overrun
               0        0       0       0       0
    TX: bytes  packets  errors  dropped carrier collsns
    17132      218      0       0       0       0
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       16
14: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc fq_codel master bond0 state UP mode DEFAULT group default qlen 1000
    link/ether 00:e0:4c:48:09:36 brd ff:ff:ff:ff:ff:ff promiscuity 1 minmtu 68 maxmtu 9194
    bond_slave state BACKUP mii_status UP link_failure_count 0 perm_hwaddr 00:e0:4c:48:09:36 queue_id 2 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 16354 gso_max_segs 65535
    RX: bytes  packets  errors  dropped missed  mcast
    2760       41       0       0       0       0
    RX errors: length   crc     frame   fifo    overrun   nohandler
               0        0       0       0       0       5
    TX: bytes  packets  errors  dropped carrier collsns
    6412       88       0       0       0       0
    TX errors: aborted  fifo   window heartbeat transns
               0        0       0       0       4

I thought that by defining multiqueue mode I would be able to ping the device 192.168.101.11 (via eth2), which is connected to the BACKUP interface, but there is no way to ping it. Is there a solution to ping the device connected to the BACKUP interface? I can't set eth2 to ACTIVE; I need to ping while it is in BACKUP state. Thank you so much.

1 Answer


One relatively easy way is to add static routes.

Example: the WAN 1 gateway IP is 1.2.3.4 and the WAN 2 gateway IP is 4.3.2.1.

/ip route add distance=1 dst-address=8.8.8.8 gateway=1.2.3.4
/ip route add distance=1 dst-address=8.8.4.4 gateway=4.3.2.1

Now pings to Google's primary DNS, 8.8.8.8, will go out WAN 1, and pings to its secondary DNS, 8.8.4.4, will go out WAN 2; if one link is down, pings to that target will fail.
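The /ip route commands above are MikroTik RouterOS syntax; on a plain Linux box like the one in the question, the same per-destination pinning would be done with iproute2 (a sketch using the same hypothetical gateway addresses):

```shell
# Pin each probe target to one gateway via a /32 host route
ip route add 8.8.8.8/32 via 1.2.3.4
ip route add 8.8.4.4/32 via 4.3.2.1
```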

There are more elegant solutions but this one is a quick way.