Just upgraded from Debian 10 to 11, and my unprivileged container is no longer assigned an IP through the config file.

/var/lib/lxc/DNS/config

# Template used to create this container: /usr/share/lxc/templates/lxc-download
# Parameters passed to the template:
# Template script checksum (SHA-1): 273c51343604eb85f7e294c8da0a5eb769d648f3
# For additional config options, please look at lxc.container.conf(5)

# Uncomment the following line to support nesting containers:
#lxc.include = /usr/share/lxc/config/nesting.conf
# (Be aware this has security implications)


# Distribution configuration
lxc.include = /usr/share/lxc/config/common.conf
lxc.include = /usr/share/lxc/config/userns.conf
lxc.arch = linux64

# Container specific configuration
lxc.apparmor.profile = unconfined
lxc.idmap = u 0 1258512 65536
lxc.idmap = g 0 1258512 65536
lxc.rootfs.path = dir:/var/lib/lxc/DNS/rootfs
lxc.uts.name = DNS
lxc.start.auto = 1

# Network configuration
lxc.net.0.type = veth
lxc.net.0.link = br.lxc
lxc.net.0.flags = up
lxc.net.0.ipv4.address = 192.168.5.2/24
lxc.net.0.ipv4.gateway = 192.168.5.1
lxc.net.0.hwaddr = DC:A6:32:xx:xx:xx
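
One way to check whether LXC is even parsing the network section is to restart the container with debug logging and grep the log for the ipv4 keys (a sketch; the log path is just an example):

# lxc-stop -n DNS
# lxc-start -n DNS -l DEBUG -o /tmp/lxc-DNS.log
# grep -iE 'ipv4|gateway' /tmp/lxc-DNS.log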

lxc-info doesn't show an IP. It seems like LXC is just ignoring the config file.

Name:           DNS
State:          RUNNING
PID:            32190
Link:           vethXPVwwA
 TX bytes:      2.39 KiB
 RX bytes:      778 bytes
 Total bytes:   3.15 KiB
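
To rule out lxc-info itself, the address can also be checked from inside the container (a sketch):

# lxc-attach -n DNS -- ip -4 addr show eth0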

There are also these other interfaces that appeared after the upgrade (seen from inside the LXC):

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN group default qlen 1000
    link/gre 0.0.0.0 brd 0.0.0.0
3: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
4: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN group default qlen 1000
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
5: eth0@if46: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether dc:a6:32:xx:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
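
As far as I can tell, gre0/gretap0/erspan0 appear in every network namespace once the host has the GRE modules loaded, so they may be a red herring; checking on the host (a sketch):

# lsmod | grep -E 'gre|erspan'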

I'd assign the IP manually, but there is no systemd-networkd or /etc/network/interfaces in this container.

# ls -l /etc/network/interfaces
ls: cannot access '/etc/network/interfaces': No such file or directory
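
To see what is supposed to configure networking inside this container, a quick survey from the host (a sketch; the paths depend on the container image):

# lxc-attach -n DNS -- sh -c 'ls /etc/systemd/network /etc/netplan /etc/network 2>/dev/null'
# lxc-attach -n DNS -- systemctl is-enabled systemd-networkd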

The container itself is running fine and all the services started; the networking is just missing an IP. Assigning the IP manually to eth0 inside the LXC doesn't propagate to the host (the host-side veth doesn't show an IP).
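
For reference, this is roughly what assigning it manually looks like (a sketch of the commands, run from the host):

# lxc-attach -n DNS -- ip addr add 192.168.5.2/24 dev eth0
# lxc-attach -n DNS -- ip route add default via 192.168.5.1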

I am also getting quite a lot of messages in dmesg after I add routes and IPs manually to the LXC:

[13417.386863] WARNING (unknown src intf):IN=br.lxc OUT= MAC=ff:ff:ff:ff:ff:ff:dc:a6:32:xx:xx:xx:xx:00 SRC=0.0.0.0 DST=255.255.255.255 LEN=314 TOS=0x00 PREC=0xC0 TTL=64 ID=0 PROTO=UDP SPT=68 DPT=67 LEN=294 
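
That "WARNING (unknown src intf):" prefix looks like it comes from a firewall LOG rule rather than the kernel itself, and the packet looks like a DHCP discover (UDP 68 -> 67) broadcast from the container's MAC. Finding the rule that logs it (a sketch; depends on whether the host uses iptables or nftables):

# iptables -S | grep -F 'unknown src intf'
# nft list ruleset | grep -i 'unknown src intf'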

It's not an unknown interface; the host knows exactly where it is and has a route:

50: br.lxc: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5e:82:0a:99:4b:fc brd ff:ff:ff:ff:ff:ff

192.168.5.0/24 dev br.lxc proto kernel scope link src 192.168.5.1 
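
To double-check that the container's veth is actually enslaved to the bridge (a sketch):

# ip link show master br.lxc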

What am I missing here? Why is the veth missing its IP? I really need some help on this one.

  • Did the veth have an IP before? I use LXC regularly with some Ubuntu containers on an Ubuntu host, and I don't recall the veth interfaces ever having an IP -- the IP is just on the eth0 interface inside the container, and then the gateway address is on the host, which from your snippets looks fine (192.168.5.1 @ br.lxc). If you do `sudo brctl show` and see the veth interface associated with br.lxc (which should work), then you should be able to ping between host and container with the container eth0 set to 192.168.5.5. Also, I don't think Debian uses netplan, but you could check `/etc/netplan` – A. Trevelyan Dec 08 '22 at 03:33
  • The container's eth0 interface (inside the container) used to get an IP. /etc/netplan doesn't exist either. If I manually add the IP inside the container, then yes, I can ping from the host to the container. But routed traffic across the host to the container doesn't work. I think the root problem is that the container is no longer getting provisioned an IP. –  Dec 08 '22 at 21:01
  • I mean that would be the problem on boot, yeah, but if you apply the config after it's booted, I don't see a good reason why it wouldn't just work. So what works vs. what fails after you add the IP in the container? Is it that `VM --> Host` works, `Host --> VM` works, `VM --> non-host/internet` fails, and `non-host/internet --> VM` fails? Can you do a tcpdump on the host, send some traffic, and show the output? Also show the output of `sudo iptables -t nat -L -v` on the host. – A. Trevelyan Dec 09 '22 at 04:30

0 Answers