
Each PVE has one VM acting as a firewall and several other VMs, organized in subnetworks addressed with RFC1918, according to this diagram.

For better understanding, this is the network addressing:

PVE01 - Net 01 - 172.1.10.0/27
PVE01 - Net 02 - 172.1.20.0/27
PVE01 - Net 03 - 172.1.30.0/27

PVE02 - Net 01 - 172.2.10.0/27
PVE02 - Net 02 - 172.2.20.0/27
PVE02 - Net 03 - 172.2.30.0/27

Currently, any server in the structure can communicate with any other server on the same PVE. The goal is to have any VM of Server A communicate with any VM of Server B and vice versa. Both PVEs are already connected to the same vRack in the OVH Web Manager (this is the best I could do following the OVH documentation).

I want both firewalls to communicate through the vRack. Has anybody done such a configuration? If so, is there any documentation that can help me configure both interfaces?

1 Answer


Each VM can communicate with other VMs on the same host, even in different virtual networks, because the host has a route for each virtual network it manages. You can run `ip route` on each host to see that.

You could solve this by adding static routes: on each host, manually add one route for each virtual network hosted on the other hosts. If your infrastructure later scales with more hosts and virtual networks, this will not be convenient to maintain.
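To see why this scales poorly, here is a small sketch that enumerates the `ip route add` commands each host would need for every remote virtual network. The 10.0.0.x transfer addresses, the `eth1` interface name, and the host-to-network mapping are assumptions for illustration, not part of your existing setup:

```python
# Hypothetical inventory: the vRack IP of each PVE host and the
# virtual networks it hosts (addresses taken from the question).
hosts = {
    "PVE01": {"vrack_ip": "10.0.0.1",
              "nets": ["172.1.10.0/27", "172.1.20.0/27", "172.1.30.0/27"]},
    "PVE02": {"vrack_ip": "10.0.0.2",
              "nets": ["172.2.10.0/27", "172.2.20.0/27", "172.2.30.0/27"]},
}

def routes_for(local, dev="eth1"):
    """Return the 'ip route add' commands 'local' needs to reach every
    virtual network hosted on the other PVEs."""
    cmds = []
    for name, info in hosts.items():
        if name == local:
            continue  # networks on the local host already have routes
        for net in info["nets"]:
            cmds.append(f"ip route add {net} via {info['vrack_ip']} dev {dev}")
    return cmds

for cmd in routes_for("PVE01"):
    print(cmd)
```

With two hosts this is only three routes per side, but every new host or virtual network adds a route on every other host, which is the maintenance burden mentioned above.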

A better way would be a single router, physical or virtualized, and a VLAN setup with Open vSwitch.
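As a rough illustration of that direction, assuming Open vSwitch is installed on each PVE (`apt install openvswitch-switch`), the bridge could carry the vRack link as a trunk and tag per-network ports. The bridge, port, and tag names below are purely illustrative assumptions:

```shell
# Hypothetical sketch: one OVS bridge per PVE, vRack link as trunk,
# VM-facing internal ports tagged per virtual network.
ovs-vsctl add-br vmbr1
ovs-vsctl add-port vmbr1 eth1                 # physical vRack link (trunk)
ovs-vsctl add-port vmbr1 net01 tag=10 -- set interface net01 type=internal
ovs-vsctl add-port vmbr1 net02 tag=20 -- set interface net02 type=internal
```

A central router (or one firewall VM acting as router-on-a-stick) would then route between the tagged VLANs instead of each host carrying static routes.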

Dylan
  • I thought about the routes, thanks for your help. But do you have any suggestion about how to set up these routes without installing openvswitch? I want to know whether this can be done without extra software. BTW, it is unlikely that this infrastructure would grow in the number of subnetworks. – Gilberto Martins Nov 18 '21 at 17:11
  • You said "Each VM can communicate with other VMs on the same host, even in different virtual networks, because the host has routes for each virtual networks managed by itself". Not really. The firewall has one interface in each subnetwork, and each interface is connected to a Proxmox bridge, except one that is linked to a bridge on the external interface. I have linked each of these bridges to dummy interfaces I've configured in the PVE. So on the PVE you can sniff the whole traffic, but not in the VMs. The VMs only have routes to the default gateway. – Gilberto Martins Nov 18 '21 at 18:10
  • Regarding your first comment: if you do not intend to increase the number of hosts or virtual networks, static routes are a good solution. Have a look at your distribution's manual or online guides; setting static routes and making them persistent across reboots is an already covered topic. You may want to search for Debian, as Proxmox is based on that distribution. – Dylan Nov 19 '21 at 02:17
  • Reply to your second comment: I am not sure what you mean; the firewall is not responsible for routing. If one VM can reach another VM on the same PVE but in a different subnet (as you stated in the initial question), this is because the VM uses the PVE as default gateway, and the PVE has a route to the other network using the bridge as `dev`; you will also notice that the PVE has an IP inside that virtual network. Have a look at `ip route` on the PVE. The PVE is a router between the different virtual networks and the WAN. – Dylan Nov 19 '21 at 02:23
  • 1
    I forgot, but both your PVE must be in the same segment if you want your static routes to work because your virtual networks are not defined in OVH's routers. You will have to use the vRack, each PVE must get an IP inside the vRack in the same VLAN and in the same network (you will create another one). Let's say the vRack link is eth1 on each PVE, PVE01 gets 10.0.0.1/24 on eth1, and PVE02 gets 10.0.0.2/24 on eth1 too. Then, one route on PVE01 could look like `172.2.10.0/27 via 10.0.0.2 dev eth1` – Dylan Nov 19 '21 at 02:31
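Putting the last comment together with the persistence advice, a sketch of what a Debian/Proxmox `/etc/network/interfaces` stanza on PVE01 could look like. The interface name `eth1` and the 10.0.0.0/24 transfer network are the assumptions from the comment above, and the mirror configuration on PVE02 (address 10.0.0.2/24, routes toward 172.1.x.0/27 via 10.0.0.1) is implied:

```
# Hypothetical /etc/network/interfaces stanza on PVE01:
# vRack transfer address plus persistent routes to PVE02's networks.
auto eth1
iface eth1 inet static
    address 10.0.0.1/24
    post-up ip route add 172.2.10.0/27 via 10.0.0.2 dev eth1
    post-up ip route add 172.2.20.0/27 via 10.0.0.2 dev eth1
    post-up ip route add 172.2.30.0/27 via 10.0.0.2 dev eth1
```

The `post-up` hooks re-add the routes each time the interface comes up, which keeps them persistent across reboots without extra software.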