I have two machines interconnected over two ports, and I want both ports teamed into a single interconnect link. I've configured the team runner as activebackup and the link-state checker as arp_ping. The result works and generally handles itself well, but the active-passive nature of the connection seems to cause a problem in some scenarios, leaving the port down:
[root@machine1 ~]# teamdctl team_master state
setup:
  runner: activebackup
ports:
  team_slave1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: arp_ping
        link: up
        down count: 0
  team_slave0
    link watches:
      link summary: down
      instance[link_watch_0]:
        name: arp_ping
        link: down
        down count: 0
runner:
  active port: team_slave1
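For reference, a minimal sketch of the kind of setup this is, in nmcli form (the interface names eth1/eth2 and the ARP target 192.0.2.1 below are placeholders, not my exact values):

```shell
# Rough reconstruction of the setup (not my exact commands):
# a team device with the activebackup runner and an arp_ping link
# watch; 192.0.2.1 stands in for the peer machine's address.
nmcli con add type team ifname team_master con-name team_master \
    config '{
        "runner": { "name": "activebackup" },
        "link_watch": {
            "name": "arp_ping",
            "interval": 1000,
            "target_host": "192.0.2.1"
        }
    }'

# Enslave the two physical ports (eth1/eth2 are placeholders).
nmcli con add type team-slave ifname eth1 con-name team_slave0 master team_master
nmcli con add type team-slave ifname eth2 con-name team_slave1 master team_master
```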
I've configured the whole thing through NetworkManager, and to reproduce the problem it's enough to take down the active interface on both machines simultaneously with nmcli dev disconnect team_slave1. The result is (ip addr):
10: team_master: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000
    link/ether ...
And
[root@machine1 ~]# teamdctl team_master state
setup:
  runner: activebackup
ports:
  team_slave0
    link watches:
      link summary: down
      instance[link_watch_0]:
        name: arp_ping
        link: down
        down count: 0
runner:
  active port:
Somehow the takeover just didn't happen, and the main reason seems to be that NetworkManager considers the passive port to be down and doesn't attempt to bring it up.
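For what it's worth, the stuck state is easy to detect from a script by checking whether the runner section of the teamdctl output names an active port. A small helper (the function name is mine):

```shell
# has_active_port: reads `teamdctl <team> state` output on stdin and
# succeeds only when the runner section names a non-empty active port.
has_active_port() {
    awk '/active port:/ { found = ($3 != "") } END { exit found ? 0 : 1 }'
}

# Example:
#   teamdctl team_master state | has_active_port || echo "team is stuck"
```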
Has anyone encountered something similar?
Any ideas what to do?
Using the round-robin runner seems to help, but it's not the scheme I'd prefer.
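One direction I'm considering while staying with activebackup is a NetworkManager dispatcher script that re-activates the slave connections whenever the team device goes down. A rough, untested sketch (connection names match my setup; see NetworkManager-dispatcher(8) for the argument convention):

```shell
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/90-team-recover (rough sketch).
# NetworkManager invokes dispatcher scripts with the interface name
# in $1 and the action in $2 (see NetworkManager-dispatcher(8)).
IFACE="$1"
ACTION="$2"

# When the team device itself goes down, try to bring the slave
# connections back up so the runner can pick a new active port.
if [ "$IFACE" = "team_master" ] && [ "$ACTION" = "down" ]; then
    for conn in team_slave0 team_slave1; do
        nmcli connection up "$conn" || true
    done
fi
```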