
We have a server connected to two switches via two NICs. Each NIC carries two VLANs, management and production. Right now only one switch is connected, so we haven't set up spanning tree etc. yet.

We have LXC installed, and want to bridge (rather than NAT) the LXC containers so they are on the same subnet as the host.
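
For reference, attaching a container to the bridge would then just be a matter of its LXC config (a minimal sketch, assuming LXC 1.x key names and the br_prod bridge we are trying to create):

    # /var/lib/lxc/<container>/config -- attach the container's veth to the host bridge
    lxc.network.type = veth
    lxc.network.link = br_prod                 # the bridge defined on the host
    lxc.network.flags = up
    lxc.network.hwaddr = 00:16:3e:xx:xx:xx     # template MAC, LXC fills in the xx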

When we try to create a bridge in /etc/network/interfaces on the host Ubuntu server, networking fails to start, and we have to go to the console, remove the edits and reboot (lucky we have LOM cards!).

interfaces file:

auto em1.3
iface em1.3 inet manual
bond-master bond2
bond-primary em1.3

auto em2.3
iface em2.3 inet manual
bond-master bond2

auto bond2 #Production VLAN
iface bond2 inet static
address 10.100.100.10
netmask 255.255.255.0
gateway 10.100.100.1
dns-nameservers 10.100.10.1
bond-slaves em1.3, em2.3
bond-miimon 100
bond-mode active-backup
dns-nameservers 10.100.100.1

auto br_prod
iface br_prod inet dhcp
   bridge_ports bond2
   bridge_fd 0
   bridge_maxwait 0

When we add that last section (br_prod), the server won't start networking, and we have to use the console. It says "waiting another 60 seconds for networking to start", but it never does.
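
For anyone hitting the same hang, a few commands that can help narrow it down from the console without a full reboot (this assumes the stock ifupdown/bridge-utils/ifenslave tooling and the interface names above):

    # bring the bridge up by hand with verbose output instead of rebooting
    ifdown br_prod ; ifup -v br_prod

    # check which stanzas ifupdown actually parsed (ifupdown 0.7+)
    ifquery --list
    ifquery br_prod

    # look for bonding/bridge errors logged during boot
    tail -n 100 /var/log/syslog

    # inspect the pieces once they exist
    cat /proc/net/bonding/bond2
    brctl show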

I also tried adding

pre-up ifup bond2
post-down ifup bond2

Tried making it manual.

Tried making it static rather than DHCP, supplying an appropriate IP/netmask/gateway. No luck.

Tried naming it br2 instead of br_prod, tried pre_up/post_down, bridge-ports, etc. We tried every combination of options, switches, and underscores vs. dashes. Always the same effect: networking won't start (and no errors).

Any ideas?

UPDATE 1

Based on the answer from electrometro below, I tried this:

auto bond1
iface bond1 inet static
  address 10.30.30.10
  netmask 255.255.255.0
  #bond-slaves em1.2, em2.2
  bond-slaves none
  bond-miimon 100
  bond-mode active-backup
  up route add -net .....

auto em1.2
iface em1.2 inet manual
  bond-master bond1
  bond-primary em1.2

auto em2.2
iface em2.2 inet manual
    bond-master bond1
    bond-primary em1.2

auto br1
iface br1 inet manual
   bridge_ports bond1
   bridge_fd 0
   bridge_maxwait 0

But I get the same problem: networking doesn't start.

UPDATE 2

Thanks for the contribution by Oliver. I tried this config, and networking comes up; I can use ifconfig to see the interfaces, but I can't SSH in because the routing is not working. Basically, I can't ping the default gateway used in the manually added route.

auto em1.2
iface em1.2 inet manual

auto em2.2
iface em2.2 inet manual

auto bond1
iface bond1 inet manual
   bond-slaves em1.2 em2.2
   bond-mode active-backup

auto br10
iface br10 inet static
    address 10.30.30.10
    netmask 255.255.255.0
    bridge_ports bond1
    up route add -net 10.242.1.0/24 gw 10.30.30.1 dev bond1 # also tried dev br10

The reason we are manually setting a gateway is that we have two networks defined: production and management. We have two interfaces, each connected to a switch. Each interface carries failover for both networks, and the production network has the default gateway. For now I am just trying to get a bridge on the management network as a start.
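
For reference, an equivalent iproute2 form of that route, plus a few checks that can be run from the console (a sketch, using the same addresses as above; since br10 owns the management IP, the gateway is reached through the bridge rather than through the bond underneath it):

    # iproute2 equivalent of the "up route add" line, against the live system
    ip route add 10.242.1.0/24 via 10.30.30.1 dev br10

    # quick sanity checks
    ip route show                   # the connected 10.30.30.0/24 route should say "dev br10"
    ping -c 3 10.30.30.1            # gateway reachability over the bridge
    tcpdump -e -n -i em1 vlan 2     # confirm frames leave tagged on the active slave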

UPDATE 3

In a long line of trial and error I also tried specifying the VLAN:

auto em1.2
iface em1.2 inet manual

auto em2.2
iface em2.2 inet manual

auto bond1
iface bond1 inet manual
    bond-slaves em1.2 em2.2
    bond-mode active-backup

auto br10.2
iface br10.2 inet static
    address 10.30.30.10
    netmask 255.255.255.0
    bridge_ports bond1
    up route add -net 10.242.1.0/24 gw 10.30.30.1 dev br10.2
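
One caveat with this naming: with the vlan package installed, an interface called br10.2 may be picked up by the VLAN hooks as "VLAN 2 on a device called br10" rather than as a plain bridge name, so it is worth checking what actually got created:

    # see how the kernel classified the new interface
    ip -d link show br10.2          # "vlan ... id 2" here would mean a VLAN device was created, not a bridge
    brctl show                      # lists the real bridges and their ports
    cat /proc/net/bonding/bond1     # shows which slave is currently active in the bond
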
eos
  • Just a question: do you have the ifenslave, bridge-utils and vlan packages installed? – Oliver Mar 29 '16 at 20:08
  • @oliver, yes, all 3 of those packages are installed according to dpkg --get-selections. It works until we define any bridge on a bond, then networking won't start. – eos Mar 29 '16 at 22:02
  • I have revised my answer after testing in a Ubuntu VM. – Oliver Mar 30 '16 at 13:52
  • Why do you insist on specifying the VLAN on the physical interfaces? Take the physical interfaces (without VLAN tag), bond them together and then add the VLAN tags to the bonding interface. – Oliver Mar 31 '16 at 08:13
  • Hi Oliver, because we have multiple VLANs on the same physical interfaces. What I show here is the management network; the same pair of physical interfaces is bonded on the production network. It took the engineers many days to get that part working. – eos Mar 31 '16 at 12:03
  • Ok then, have you tried changing it to the way I suggested? Bond the physical interfaces without the VLANs and add the VLAN to the bonded interface. – Oliver Mar 31 '16 at 12:06

2 Answers


Here is a similar setup that is working for a Docker host. Hope it points you in the right direction.

# Interface bond_lan
auto bond_lan
iface bond_lan inet manual
    slaves none
    bond-mode active-backup
    bond-miimon 100

# Interface bridge_lan
auto bridge_lan
iface bridge_lan inet static
    address 10.10.10.129
    netmask 255.255.0.0
    gateway 10.10.0.1
    bridge_ports bond_lan
    bridge_stp on
    bridge_fd 0
    bridge_maxwait 0

# Interface em1
auto em1
iface em1 inet manual
    bond-master bond_lan
    bond-primary em1

# Interface em2
auto em2
iface em2 inet manual
    bond-master bond_lan
    bond-primary em1

# Interface lo
auto lo
iface lo inet static
    address 127.0.0.1
    netmask 255.0.0.0
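
If the production network is tagged (VLAN 3 in the question's original config), the same pattern can presumably be extended by pointing bridge_ports at a VLAN sub-interface of the bond rather than at the bond itself. An untested sketch, reusing the question's addresses:

    # untested: bridge a tagged VLAN of the bond instead of the bond itself
    auto br_prod
    iface br_prod inet static
        address 10.100.100.10
        netmask 255.255.255.0
        gateway 10.100.100.1
        bridge_ports bond_lan.3
        bridge_stp off
        bridge_fd 0
        bridge_maxwait 0
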
Jared Mackey

Tested the following in an Ubuntu VM:

  • Create an active-backup bonding interface with the physical interfaces em1 and em2:

    auto bond0
    iface bond0 inet manual
        bond-slaves none
        bond-mode active-backup
    
    auto em1
    iface em1 inet manual
        bond-master bond0
    
    auto em2
    iface em2 inet manual
        bond-master bond0
    
  • Create a first bridge, which can also be used for managing the machine on the specified IP address. We use VLAN 100 for this:

    auto br100
    iface br100 inet static
        address 10.100.100.10
        netmask 255.255.255.0
        gateway 10.100.100.1
        bridge_ports bond0.100
    
  • Create a second bridge used for the production traffic; we assume this is VLAN 200:

    auto br200
    iface br200 inet manual
        bridge_ports bond0.200
    

You can now add your containers or VMs to br100 and/or br200, depending on your needs.
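
A few quick checks after ifup, assuming the interface names above, to confirm the bond, the VLANs and the bridges came up as intended:

    cat /proc/net/bonding/bond0     # bond mode and currently active slave
    ip -d link show bond0.100       # should report "vlan ... id 100"
    brctl show                      # br100 should list bond0.100 as a port, br200 should list bond0.200
    ping -c 3 10.100.100.1          # gateway reachability from the host over br100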

Updated: changed the way bond0 is created. Rather than referencing the physical interfaces from the bond master, the physical interfaces now point to the bond master.

Oliver