3

On an AWS EC2 instance I would like to host LXC containers as a kind of virtual server. I created a bridge (br0) containing only eth0 and gave it the private IP of my VPC subnet. I reconfigured LXC to use my br0 device as the bridge instead of lxcbr0.

When I add a new container and assign it an IP address from my VPC subnet, I can reach the container from the LXC host, and I can also reach the LXC host from within the container. However, no other address can be reached, even though it is in the same subnet.

Bridge configuration:

auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_fd 2
    bridge_stp off
    address 10.8.0.11
    netmask 255.255.255.0
    network 10.8.0.0
    broadcast 10.8.0.255
    gateway 10.8.0.1
    dns-nameservers 8.8.8.8 8.8.4.4

The VPC NIC has "Source/Dest. check" disabled.

net.ipv4.ip_forward is set to 1.

No iptables rules exist.

eth0 is set to promiscuous mode (ip link set eth0 promisc on).

The LXC containers are correctly attached to my bridge.
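For reference, a sketch of how the checks above can be verified on the host (brctl is provided by the bridge-utils package; exact output depends on the machine):

```shell
sysctl net.ipv4.ip_forward   # should print: net.ipv4.ip_forward = 1
iptables -L -n -v            # should show empty chains with ACCEPT policies
ip link show eth0            # the flags should include PROMISC
brctl show br0               # eth0 and the containers' veth devices should be listed
```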

In a hardware-only environment, as well as in a VirtualBox environment, this setup worked. On AWS, however, it does not.

devnull
  • 193
  • 5
  • I understand LXC is the basis for Docker, and Docker is the basis for the Amazon Elastic Container Service. Have you considered using ECS? I know this doesn't answer your question, but it's something to consider. – Tim Oct 10 '16 at 17:50
  • Unfortunately we have legacy applications which rely on a complete running OS (the whole init process and services), which as far as I know is not easily done in Docker, and it's also not the way Docker is meant to be used. But in general, you're right. – devnull Oct 10 '16 at 19:11
  • 1
    *assign it an IP address of my VPC's subnet* Are you just making up addresses, or are you associating them with the Elastic Network Interface that is attached to your instance? If the former, there's the answer... a VPC is not an Ethernet LAN. Promiscuous mode probably accomplishes nothing, as well. – Michael - sqlbot Oct 10 '16 at 20:15
  • I tried adding the container's IP address to the ENI, but the bridge was still not working. See the answer from Nath. – devnull Oct 11 '16 at 13:33

2 Answers

2

Bridging won't work: a VPC isn't a layer-2 network, and all IPs need to be assigned through the EC2 API. Your best bet is to use a totally separate (non-conflicting) subnet and have the host route traffic to the LXC containers. Then update your VPC route tables with a static route to this subnet via your EC2 instance's NIC. This is how OpenVPN works.
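This routed setup can be sketched with concrete commands. Everything below is an assumption for illustration: the container subnet 10.9.0.0/24, the bridge name lxcbr0, and the rtb-/i- identifiers are placeholders to be replaced with your own values:

```shell
# On the EC2 host: give the container bridge its own non-VPC subnet.
ip addr add 10.9.0.1/24 dev lxcbr0

# Route between the VPC-facing NIC and the container bridge.
sysctl -w net.ipv4.ip_forward=1

# Let the instance forward traffic that isn't addressed to itself.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 --no-source-dest-check

# In the VPC route table: send the container subnet to this instance.
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.9.0.0/24 \
    --instance-id i-0123456789abcdef0
```

Containers then take addresses in 10.9.0.0/24 with 10.9.0.1 as their gateway, and other VPC hosts reach them via the static route.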

Nath
  • 1,322
  • 9
  • 10
1

Following Nath's answer, I put the LXC containers into their own network and routed the traffic between the networks. Now it works!
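For anyone reproducing this, a sketch of what the per-container network config might look like under this scheme (LXC 1.x syntax; the bridge name and addresses are examples, not the poster's actual values):

```
# /var/lib/lxc/<container>/config
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.ipv4 = 10.9.0.2/24
lxc.network.ipv4.gateway = 10.9.0.1
```

The host routes between this subnet and the VPC subnet, so no bridge onto eth0 is needed.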

devnull
  • 193
  • 5
  • Glad to know it helped – Nath Oct 11 '16 at 21:58
  • Hmm. I wonder what I'm doing wrong. Does this still work in 2023 or has AWS killed this off? I must be making a wrong turn somewhere as it ain't working for me. – AnthonyK Jul 02 '23 at 08:51
  • Finally found the issue! I had not disabled src/dst check on the interface. It was right there in your original post - :(. It took me 5 hrs to find it - dang it! – AnthonyK Jul 02 '23 at 14:31