
I'm on Ubuntu 20.04 and am running virtual machines (KVM) locally that are attached to a bridge interface on the host. The bridge and all VMs attached to it get their IPs via DHCP from a DSL router on the same network.

The bridged interface on the VM host looks like this:

br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.188.22  netmask 255.255.255.0  broadcast 192.168.188.255
        inet6 fe80::2172:1869:b4cb:ec84  prefixlen 64  scopeid 0x20<link>
        inet6 2a01:c22:8c21:4200:6dd0:e662:4f46:c591  prefixlen 64  scopeid 0x0<global>
        inet6 2a01:c22:8c21:4200:8d92:1ea5:3c93:3668  prefixlen 64  scopeid 0x0<global>
        ether 00:d8:61:9d:ad:c5  txqueuelen 1000  (Ethernet)
        RX packets 1512101  bytes 2026998740 (2.0 GB)
        RX errors 0  dropped 12289  overruns 0  frame 0
        TX packets 849612  bytes 1582945488 (1.5 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I've enabled IP forwarding on the host and configured the VMs to use the host as the gateway.
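
For reference, enabling forwarding and pointing a VM at the host looks roughly like this (a sketch; the interface names and addresses are the ones from this setup):

# on the VM host: enable IPv4 forwarding (add to /etc/sysctl.conf to persist across reboots)
sysctl -w net.ipv4.ip_forward=1

# inside a VM: use the host's bridge address as the default gateway
ip route replace default via 192.168.188.22 dev eth0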

This is what the routes look like inside a VM:

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.188.22  0.0.0.0         UG    0      0        0 eth0
192.168.188.0   0.0.0.0         255.255.255.0   U     100    0        0 eth0

Routes on the VM host:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.188.1   0.0.0.0         UG    425    0        0 br0
10.8.0.1        10.8.0.17       255.255.255.255 UGH   50     0        0 tun0
10.8.0.17       0.0.0.0         255.255.255.255 UH    50     0        0 tun0
172.28.52.0     10.8.0.17       255.255.255.0   UG    50     0        0 tun0
192.168.4.0     10.8.0.17       255.255.255.0   UG    50     0        0 tun0
192.168.5.0     10.8.0.17       255.255.255.0   UG    50     0        0 tun0
192.168.10.0    10.8.0.17       255.255.255.0   UG    50     0        0 tun0
192.168.50.0    10.8.0.17       255.255.255.0   UG    50     0        0 tun0
192.168.100.0   10.8.0.17       255.255.255.0   UG    50     0        0 tun0
192.168.188.0   0.0.0.0         255.255.255.0   U     425    0        0 br0
192.168.188.1   0.0.0.0         255.255.255.255 UH    425    0        0 br0
213.238.34.194  192.168.188.1   255.255.255.255 UGH   425    0        0 br0
213.238.34.212  10.8.0.17       255.255.255.255 UGH   50     0        0 tun0

Accessing IPs on the 192.168.188.0/24 network and on the public internet from inside the virtual machines works fine, but I can't figure out how to route traffic from inside the VMs to any of the IPs/networks that are reachable through the "tun0" interface on the VM host itself.

/proc/sys/net/ipv4/ip_forward is set to "1", and I've manually flushed all firewall rules from all tables/chains (using iptables -F) to avoid any interference. What more do I need to be able to, for example, run "ping 192.168.50.2" from inside one of the virtual machines?
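
A sketch of how to verify forwarding and flush everything (note that "iptables -F" without "-t" only flushes the filter table, so the other tables need flushing explicitly):

# confirm forwarding is enabled
sysctl net.ipv4.ip_forward

# flush every table and make the filter policy permissive
for t in filter nat mangle raw; do iptables -t "$t" -F; done
iptables -P FORWARD ACCEPT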

This is what I captured by running "tcpdump -i br0 host " on the VM host while trying to access 192.168.50.212 (one of the machines on the "tun0" network) from inside one of the VMs:

[Wireshark screenshot of the capture; not available in text form]

How can I get the VMs attached to the local "br0" interface to also have connectivity with the networks accessible only through the local "tun0" device?

  • 1) Don't use `ifconfig`, `route`, `brctl` and so on. Their output is verbose, but that additional information is misleading and obscures important details, in particular in the `route` output. Use `ip addr` and `ip route` instead, and get into the general habit of using `iproute2` to work with the Linux network stack instead of the ancient `net-tools` style (iproute2 equivalents are sketched after these comments). 2) Please add filtered textual output of `tcpdump -vvnr` on the capture file instead of a Wireshark screenshot. 3) Run `tcpdump` on `br0`, on `tun0`, and on the target simultaneously, to see which packets get forwarded, which don't, and which ones are answered. – Nikita Kipriyanov Feb 25 '21 at 09:43
  • If the bridge was really working only as a bridge, the VM's gateway would also be 192.168.188.1, like the host, and not the host's 192.168.188.22 address. So you have a strange configuration that can only add more problems. – A.B Apr 08 '21 at 18:21
  • This is not a "strange" configuration. My DHCP server/internet router (Fritzbox) does not support OpenVPN, so I have to use my desktop machine (the same one that is also running the VMs) for VPN tunneling. That's why the VMs use my desktop machine's IP as their default gateway, and that's the reason I want to forward traffic from br0 to tun0. The bridge setup is the standard solution for having VMs on the same subnet as the VM host, see https://serverfault.com/questions/1075408/can-i-have-my-kvm-guests-on-the-same-subnet-as-the-host or one of the myriad other websites. – Tobias Gierke Oct 20 '22 at 08:48
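
As suggested in the first comment, the iproute2/tcpdump equivalents would look roughly like this (a sketch; the capture filter is an assumption based on the target IP mentioned above):

ip addr show br0                        # replaces "ifconfig br0"
ip route show                           # replaces "route -n"
tcpdump -vvni br0 host 192.168.50.212   # textual output instead of a screenshot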

1 Answer


After ignoring the problem for a long time (I found another workaround) I finally forced myself to have a go at this one again.

Turns out the fix was surprisingly simple: Just enabling masquerading on the tun0 interface did the trick.

#!/bin/bash

# allow return traffic from the VPN back to the VMs
iptables -A FORWARD -i tun0 -o br0 -m state --state ESTABLISHED,RELATED -j ACCEPT
# forward traffic from the VMs towards the VPN
iptables -A FORWARD -i br0 -o tun0 -j ACCEPT
# rewrite the VMs' source addresses to the host's tun0 address
iptables -t nat -A POSTROUTING -o tun0 -j MASQUERADE
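
Note that these rules are not persistent across reboots; on Ubuntu they can be saved with the iptables-persistent package ("netfilter-persistent save"), for example.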

This obviously only works if you have enabled routing on the VM host and have the VMs use the VM host's IP as their default gateway (otherwise you would need to add static routes on the network's "real" default gateway, one for each of the subnets on the other side of the VPN tunnel, pointing at the VM host).
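
For illustration, the static-route alternative would look roughly like this on a Linux-based "real" gateway (a sketch; one route per tun0-side subnet, all pointing at the VM host's 192.168.188.22):

# on the network's default gateway, one route per remote subnet:
ip route add 192.168.50.0/24 via 192.168.188.22
ip route add 192.168.100.0/24 via 192.168.188.22
# ... and similarly for the other tun0-side subnets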