I use WireGuard (a VPN) to connect to a server. This server has two interfaces of interest for my problem (all the others are virtual Ethernet links to containers, and there is no problem with those):
root@srv ~# ip a
(...)
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 12:5b:d3:1d:51:cf brd ff:ff:ff:ff:ff:ff
inet 192.168.10.2/24 brd 192.168.10.255 scope global dynamic br0
valid_lft 60366sec preferred_lft 60366sec
(...)
25: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN group default qlen 1000
link/none
inet 192.168.20.1/32 scope global wg0
valid_lft forever preferred_lft forever
The routing table is:
root@srv ~# ip r
default via 192.168.10.1 dev br0 proto dhcp metric 1024
192.168.10.0/24 dev br0 proto kernel scope link src 192.168.10.2
192.168.20.0/24 via 192.168.20.1 dev wg0 proto static
192.168.20.0/24 via 192.168.10.2 dev br0 proto dhcp metric 1024
When connecting via the VPN, the following scenarios are OK:

- ssh to the WireGuard IP 192.168.20.1
- ssh to any other device on the WireGuard network 192.168.20.0/24
- ssh to any other device on 192.168.10.0/24 (except for the server itself, see below)

The one which is KO: ssh to 192.168.10.2, the br0 interface on the server where WireGuard is installed.

sshd is listening on *:22, and I can ssh to 192.168.10.2 (the IP on the bridge) from elsewhere, so sshd itself is functional.
This leaves me, I believe, with the routing between the interfaces. When a packet sent to 192.168.10.2 (the bridge interface) comes in through WireGuard, its next hop is looked up in the routing table and it matches the line
192.168.10.0/24 dev br0 proto kernel scope link src 192.168.10.2
This is probably not good.
On the other hand, I would have expected the kernel to deal with this packet, understanding that it is intended for a local interface. Or maybe I am wrong here.
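To make the lookup I am describing concrete, here is a sketch of plain longest-prefix matching over the routes from `ip r` above, using only Python's stdlib ipaddress module. This is an illustration of the main-table lookup, not what the kernel actually executes (in reality the kernel consults the local table first, which is exactly the behaviour I was expecting), and the next-hop strings are just labels:

```python
# Illustrative longest-prefix-match lookup over the server's main routes.
# Not kernel code: the real kernel checks the "local" table before this.
import ipaddress

# Routes taken from `ip r` on the server: (prefix, next hop / device label)
routes = [
    ("0.0.0.0/0",       "via 192.168.10.1 dev br0"),
    ("192.168.10.0/24", "dev br0 (connected)"),
    ("192.168.20.0/24", "via 192.168.20.1 dev wg0"),
]

def lookup(dst: str) -> str:
    """Return the route whose prefix is the longest match for dst."""
    addr = ipaddress.ip_address(dst)
    matches = [
        (ipaddress.ip_network(prefix), hop)
        for prefix, hop in routes
        if addr in ipaddress.ip_network(prefix)
    ]
    # Longest prefix wins, as in the kernel's FIB lookup.
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return f"{net} -> {hop}"

print(lookup("192.168.10.2"))  # the connected br0 route wins over default
```

So a pure main-table lookup for 192.168.10.2 selects the connected br0 route, which is what leads me to suspect the inter-interface routing.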
My question: what should the network setup be so that a packet which comes in through the WireGuard interface can reach the local bridge?