
I have a server hosting OpenVZ containers. An IPsec tunnel is configured on the hardware node (HN) and I'd like to make the remote network available in the containers (CT). How can I do this?

The current setup is like this:

  • HN has a public address on eth0
  • HN has a private address 192.168.100.1 on the alias eth0:0
  • The remote network is 192.168.200.0/24, HN is able to ping hosts on this network
  • CT has a public address on venet0, it is reachable from the outside world and can reach external hosts
  • CT has a private address 192.168.100.101. It can ping its HN on the private address 192.168.100.1
  • No firewall is configured
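For reference, a rough command-level sketch of this setup on the HN (192.168.200.10 is just an example host on the remote network):

# private alias on the HN, as listed above
ip addr add 192.168.100.1/24 dev eth0 label eth0:0
# the IPsec tunnel makes the remote network reachable from the HN
ping -c 1 192.168.200.10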

The CT can't reach hosts on the remote 192.168.200.0/24 network, and I'm not sure how to make that work. Can this be done with a venet interface for the containers, or do I have to switch to veth? Is a route missing on the HN? Do I have to enable some kind of NAT on the HN?
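As a sanity check, this is roughly how the HN can be asked which route it would use for traffic forwarded from the CT's private address toward the remote network (192.168.200.10 is again just an example host there):

ip route get 192.168.200.10 from 192.168.100.101 iif venet0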

Any help will be appreciated.

UPDATE: If the CT sends a ping from its private address, I can see the ICMP request/reply with a tcpdump on the venet0 interface on the host. It looks like the outgoing traffic is fine but the incoming traffic is blocked.
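The capture was roughly along these lines (the exact filter is an assumption):

tcpdump -ni venet0 icmp and host 192.168.100.101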

1 Answer


To make the host's IPsec tunnels available to your containers, you need to run this in your container:

sysctl -w net.ipv4.conf.venet0.disable_policy=1

This disables the IPsec policy (SPD) checks on the container's venet interface. This needs to be adapted if veth devices are used in the container.
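As a sketch, to make the setting persistent and to adapt it to a veth setup (eth0 as the in-container veth name is an assumption here), something along these lines should work:

# venet case, persisted across reboots
echo 'net.ipv4.conf.venet0.disable_policy = 1' >> /etc/sysctl.conf
sysctl -p

# veth case: apply the same sysctl to the container-side interface
# (eth0 inside the CT is an assumed name)
sysctl -w net.ipv4.conf.eth0.disable_policy=1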

For more details see: