
Note 1: As far as I understand the suggested MACVLAN architecture, I cannot use two physical network interfaces with the same MACVLAN. In my application, however, I need a single LAN/L2 domain spanning both an eth0 wired LAN and a wlan0 AP-mode WLAN.

Note 2: MACVLAN in bridge mode has another showstopper for my use case: when the physical interface goes down, all its sub-interfaces go down with it. My network-function containers would then be unable to do their work at all, which is bad, as they need to communicate with each other, not only with the external LAN.

My situation: for a project I have a set of Docker containers that work as full-blown IP nodes, especially when it comes to IPv6. These containers are to be wired up to a Docker "bridge" network inside the host, say br0, using a stock Linux kernel Ethernet bridge. This bridge br0 will get a direct port to the outside LAN via an enslaved host network interface, say eth0 (on the host!). In addition, br0 has an (optional) wlan0 AP port enslaved. N.B.: It's okay that the host will be reachable via br0.
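
For illustration, this is the plain-Linux shape of the topology, as a minimal iproute2 sketch using the interface names from above (no Docker involved yet):

    # create the bridge and bring it up
    ip link add name br0 type bridge
    ip link set dev br0 up
    # enslave the wired uplink ...
    ip link set dev eth0 master br0
    # ... and, optionally, the AP-mode WLAN interface
    # (hostapd can alternatively enslave wlan0 itself via its bridge= option)
    ip link set dev wlan0 master br0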

Now I want to create this Docker bridge network without any NAT/masquerading from br0 to any of the host's (other) network interfaces. I also neither want nor need any DHCPv4 server or DNS proxy installed on the br0 network. In fact, since br0 has eth0 and wlan0 enslaved towards the outside LAN, no auxiliary Docker network services must be instantiated at all.

How can I tell Docker to create a plain simple bridge network without any IP address management, without DNS services, and without NAT? Is this even possible using only the stock bridge network driver?
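
For context, the closest I seem to get with the stock bridge driver's documented options is something like the following sketch; the network name lan0 is a placeholder, and whether the bridge driver accepts the null IPAM driver may depend on the Docker version:

    # hedged sketch: name the Linux bridge, switch off masquerading,
    # and try to opt out of address management via the null IPAM driver
    docker network create -d bridge \
      -o com.docker.network.bridge.name=br0 \
      -o com.docker.network.bridge.enable_ip_masquerade=false \
      --ipam-driver null \
      lan0

Even then, the bridge driver never enslaves physical host interfaces, so eth0 and wlan0 would still have to be added manually (ip link set dev eth0 master br0), and as far as I can tell the embedded DNS resolver cannot be switched off for user-defined networks.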

TheDiveO
  • Doesn't look like there's an easy way to do this, but this might be a workaround: https://raesene.github.io/blog/2016/02/07/Exploration-in-Docker-bridging/ –  Oct 06 '17 at 20:53
  • Interesting reads, but from the Docker documentation the macvlan driver isn't what I need here, as I need a normal bridge, not VLAN-based demuxing which macvlan seems to be created for (if I'm not mistaken). – TheDiveO Oct 06 '17 at 21:21
  • Another critical point is that I cannot start the containers before the bridged network is set up and the container-facing network interfaces have been linked into their container namespaces. – TheDiveO Oct 06 '17 at 21:23
  • If I understand correctly, each container will need a unique MAC address to communicate on the outside LAN. –  Oct 06 '17 at 22:02
  • The interfaces need the MACs, not the containers. Also not a big deal. – TheDiveO Oct 07 '17 at 06:32
  • https://docs.docker.com/engine/userguide/networking/get-started-macvlan/#macvlan-bridge-mode-example-usage –  Oct 07 '17 at 22:27
  • The macvlan driver doesn't help in my case. I'm now looking at --network=none, creating the necessary veth peers myself and moving them into the container network namespaces (see the sketch below these comments). – TheDiveO Oct 08 '17 at 06:14
  • @Ben I've read more about the MACVLAN architecture and it works with a single parent interface only. I've updated my question because I forgot to mention that the existing bridge spans two physical interfaces, eth0 and wlan0. Do you know if the parent/upper interface of a MACVLAN can be enslaved into a bridge? Only then could I use MACVLAN in a sort of cascade. – TheDiveO Oct 09 '17 at 17:25
  • @Ben Another MACVLAN problem for me: if eth0 goes down, all sub-interfaces go down too. That's bad for my system, where the containers still need to communicate with each other while this LAN is down. – TheDiveO Oct 09 '17 at 17:41
  • Looks as if this requires writing a new Docker network plugin that sets up the bridge, enslaves the specified HW interfaces, and otherwise adds veth pairs when new containers (network sandboxes) need to be attached. Similar to the existing built-in bridge driver, but with the additional enslaving of the HW interfaces and without creating forwarding/NAT packet filter rules. – TheDiveO Jun 30 '23 at 14:37
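
A minimal sketch of the --network none route mentioned in the comments above, assuming br0 already exists; the names ctr0, veth-c0 and mycontainerimage are placeholders:

    # start the container without any Docker-managed networking
    docker run -d --name ctr0 --network none mycontainerimage
    pid=$(docker inspect -f '{{.State.Pid}}' ctr0)

    # veth pair: the host side gets enslaved into br0 ...
    ip link add veth-c0 type veth peer name eth0-tmp
    ip link set dev veth-c0 master br0
    ip link set dev veth-c0 up

    # ... and the peer moves into the container's netns as eth0
    ip link set dev eth0-tmp netns "$pid"
    nsenter -t "$pid" -n ip link set dev eth0-tmp name eth0
    nsenter -t "$pid" -n ip link set dev eth0 up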

0 Answers