On an AWS EC2 instance I would like to host LXC containers as a kind of virtual server. I created a bridge (br0) containing only eth0 and gave it the private IP address of my VPC subnet, and I reconfigured LXC to use my br0 device as the bridge instead of lxcbr0.
When I add a new container and assign it an IP address from my VPC subnet, I can reach the container from the LXC host and the LXC host from within the container. However, no other address in the same subnet can be reached.
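For reference, the container side of the setup looks roughly like this (a minimal sketch, not my exact config: the container name and the 10.8.0.21 address are placeholders, and the lxc.network.* keys assume the LXC 1.x/2.x configuration syntax):

# /var/lib/lxc/mycontainer/config (network section only; values are examples)
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 10.8.0.21/24
lxc.network.ipv4.gateway = 10.8.0.1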
Bridge configuration:
auto br0
iface br0 inet static
    bridge_ports eth0
    bridge_fd 2
    bridge_stp off
    address 10.8.0.11
    netmask 255.255.255.0
    network 10.8.0.0
    broadcast 10.8.0.255
    gateway 10.8.0.1
    dns-nameservers 8.8.8.8 8.8.4.4
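Not shown above, but once eth0 is enslaved to the bridge it carries no address of its own; its stanza in /etc/network/interfaces would typically look like this (a sketch, assuming the same ifupdown-style configuration as above):

auto eth0
iface eth0 inet manual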
The VPC NIC has "Source/Dest. check" disabled (see the AWS CLI sketch after this list).
net.ipv4.ip_forward is set to 1.
No iptables rules are present.
eth0 is in promiscuous mode (ip link set eth0 promisc on).
The LXC containers are correctly attached to my bridge (verification commands below).
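The source/destination check was changed through the console; the equivalent AWS CLI call would be roughly the following (the instance ID is a placeholder):

# disable the source/destination check on the instance
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
# confirm the current value
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute sourceDestCheck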
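For reference, the host-side items in the list above can be verified roughly like this (a sketch; exact output will vary):

# IP forwarding enabled on the host
sysctl net.ipv4.ip_forward        # expect: net.ipv4.ip_forward = 1

# no iptables rules in the filter or nat tables
iptables -L -n -v
iptables -t nat -L -n -v

# eth0 in promiscuous mode
ip link show eth0                 # flags should include PROMISC

# bridge membership: eth0 plus one veth interface per running container
brctl show br0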
This setup worked both on bare metal and in a VirtualBox environment; on AWS, however, it does not.