
First off, I have a single IPv6 address allocated to my dedicated server: a ::1/128. But I can assign further addresses to eth0 (e.g. ::2/128, ::3/128, etc.).

Now I would like to run LXC containers on that server, and I would like them to be first-class citizens: each should have its own IPv6 address.

LXC with IPv4 works fine. I can start a container and from it ping the world. I have a bridge device called lxcbr0.

Quite honestly I don't know how to proceed. In the container's LXC config I have ('prefix' stands for my assigned prefix):

lxc.network.ipv6 = prefix::3/128
lxc.network.ipv6.gateway = prefix::2 # iffy, not sure this is correct
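
For context, these two lines sit inside a larger lxc.network block; a minimal sketch of the full section (assuming a veth pair attached to lxcbr0, as in the working IPv4 setup) would be:

```
# LXC 1.x network keys
lxc.network.type = veth               # veth pair into the host bridge
lxc.network.link = lxcbr0             # host-side bridge
lxc.network.flags = up
lxc.network.ipv6 = prefix::3/128
lxc.network.ipv6.gateway = prefix::2  # iffy, not sure this is correct
```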

On the host I have enabled IPv6 forwarding via sysctl:

net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.eth0.forwarding = 1
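
These settings can be applied at runtime without a reboot; a quick sketch (run as root, assuming the sysctl.conf entries above are already in place):

```shell
# Apply the forwarding settings immediately
sysctl -w net.ipv6.conf.default.forwarding=1
sysctl -w net.ipv6.conf.eth0.forwarding=1

# Verify the values the kernel is actually using
sysctl net.ipv6.conf.eth0.forwarding
```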

Now I'm losing track. I think I need to assign the bridge an IP, so I've assigned it prefix::2/128, which is what I use as the gateway in the LXC config above. In /etc/network/interfaces:

iface lxcbr0 inet6 static
        address prefix::2
        netmask 128
        # use NDP proxy? Read that somewhere.
        post-up /sbin/ip -6 neigh add proxy prefix::3 dev eth0 #container 1
        post-up /sbin/ip -6 neigh add proxy prefix::4 dev eth0 #container 2
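
Worth noting: the 'ip -6 neigh add proxy' entries above only take effect if NDP proxying (the IPv6 analogue of proxy ARP) is enabled, which is a separate sysctl from forwarding. A sketch, assuming eth0 is the interface that should answer neighbour solicitations for the containers:

```shell
# Enable NDP proxying on the upstream interface (separate from forwarding)
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1

# The proxy entries themselves, as in 'interfaces' above
ip -6 neigh add proxy prefix::3 dev eth0   # container 1
ip -6 neigh add proxy prefix::4 dev eth0   # container 2

# Confirm the proxy entries are installed
ip -6 neigh show proxy
```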

Needless to say, this doesn't work. I can start the container and log in, but I can't ping6 anything from it, nor can I ping6 the container from the host. I know there is some business with routing...?

Some output of the current state: Host 'ip -6 a':

4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2607:5300:60:714::1/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ea40:f2ff:feed:106f/64 scope link 
       valid_lft forever preferred_lft forever
8: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 
    inet6 2607:5300:60:714::2/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::b07b:e3ff:fe33:22e7/64 scope link 
       valid_lft forever preferred_lft forever
18: vethPVJQ6M: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::fcb7:57ff:fe3c:bcd1/64 scope link 
       valid_lft forever preferred_lft forever

Container 'ip -6 a':

20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2607:5300:60:714::3/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe59:679f/64 scope link 
       valid_lft forever preferred_lft forever

Host 'ip -6 r':

2607:5300:60:714::1 dev eth0  proto kernel  metric 256 
2607:5300:60:714::2 dev lxcbr0  proto kernel  metric 256 
2607:5300:60:7ff:ff:ff:ff:ff dev eth0  metric 1024 
fe80::/64 dev eth0  proto kernel  metric 256 
fe80::/64 dev lxcbr0  proto kernel  metric 256 
fe80::/64 dev vethPVJQ6M  proto kernel  metric 256 
fe80::/64 dev vethWT7OPQ  proto kernel  metric 256 
default via 2607:5300:60:7ff:ff:ff:ff:ff dev eth0  metric 1024 

Container 'ip -6 r':

2607:5300:60:714::2 dev eth0  metric 1024 
2607:5300:60:714::3 dev eth0  proto kernel  metric 256 
fe80::/64 dev eth0  proto kernel  metric 256 
default via 2607:5300:60:714::2 dev eth0  metric 1024 

The host runs Ubuntu 15.04, LXC version 1.1.2.

I would appreciate some pointers!

harm
  • Who is your provider? What service did you purchase? – Michael Hampton Jun 11 '15 at 22:05
  • Obviously this is cheap. OVH's Kimsufi (https://www.kimsufi.com/fr/index.xml). – harm Jun 12 '15 at 05:46
  • 1
    Hmm. That's going to be a bit of a problem, since OVH really doesn't do IPv6 properly. I think you can work around their mess, but I'm going to have to do some experimentation before I can give you a complete solution. (It was on my to-do list anyway...) – Michael Hampton Jun 12 '15 at 05:49
  • Wow! That would really mean a lot to me. If there is anything I can help with let me know. – harm Jun 12 '15 at 07:31
  • I suggest a two step approach for helping us help you. 1) Attempt this with Linode or another IaaS that routes a /64 to your host. You won't need the added complexity of NDP proxy. 2) Attempt this in an IPv6-enabled IaaS that does not route a /64 to your host. You'll need NDP proxy for this. – Jeff Loughridge Jun 12 '15 at 12:45

1 Answer


It seems to me that you are conflating a number of different things here. First, I doubt that the net mask on your server's ethernet port is actually /128. I suspect it's something else (/64 perhaps) and that you're on a shared segment with a bunch of other customers.

Judging by the output of your "ip -6 a" command:

4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 2607:5300:60:714::1/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::ea40:f2ff:feed:106f/64 scope link 
       valid_lft forever preferred_lft forever
8: lxcbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 
    inet6 2607:5300:60:714::2/128 scope global 
       valid_lft forever preferred_lft forever
    inet6 fe80::b07b:e3ff:fe33:22e7/64 scope link 
       valid_lft forever preferred_lft forever
18: vethPVJQ6M: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
    inet6 fe80::fcb7:57ff:fe3c:bcd1/64 scope link 
       valid_lft forever preferred_lft forever

I would say that the /128 on the interfaces is an error. Your prefix appears to be 2607:5300:60:714::/64 (most likely).
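
A quick way to sanity-check that reading (a standalone sketch using Python's standard ipaddress module): all three global addresses seen above fall inside 2607:5300:60:714::/64, while the default gateway does not, which is consistent with a /64 on-link prefix rather than isolated /128s.

```python
import ipaddress

# The /64 prefix the addresses appear to belong to
prefix = ipaddress.ip_network("2607:5300:60:714::/64")

addresses = [
    "2607:5300:60:714::1",  # host eth0
    "2607:5300:60:714::2",  # lxcbr0
    "2607:5300:60:714::3",  # container eth0
]
for a in addresses:
    print(a, ipaddress.ip_address(a) in prefix)   # all True

gateway = ipaddress.ip_address("2607:5300:60:7ff:ff:ff:ff:ff")
print(gateway in prefix)                          # False: the gateway is off-prefix
```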

Assuming that's correct, then you'll need to set up your interfaces file as follows (add your IPv4 as needed):

auto lxcbr0
iface lxcbr0 inet6 static
  bridge_ports eth0
  bridge_fd 0
  address 2607:5300:60:714::1
  netmask 64
  gateway 2607:5300:60:7ff:ff:ff:ff:ff

Note: It's not clear how you reach 2607:5300:60:7ff::/64 to get to your default gateway. It would be very useful to know how your provider expects you to configure your network, or to have a first-hand look at any documentation they provided. My best guess from here is that the 2607:5300:60:714::/64 network is on the same link as 2607:5300:60:7ff::/64, and that 2607:5300:60:7ff::/64 is used for the provider's infrastructure. It's also unclear whether you get the entire 2607:5300:60:714::/64 or whether it's shared with other customers on the same link.

Assuming you have the freedom to assign addresses from within that range, then all you really need to do is connect your containers to the same lxcbr0 bridge and assign each container an address from that range.
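
In that scenario the container config would shrink to something like the following (a sketch under the assumptions above; the /64 netmask and the provider's gateway address are my guesses, not confirmed values):

```
lxc.network.type = veth
lxc.network.link = lxcbr0
lxc.network.flags = up
lxc.network.ipv6 = 2607:5300:60:714::3/64
lxc.network.ipv6.gateway = 2607:5300:60:7ff:ff:ff:ff:ff
```

With eth0 bridged into lxcbr0, the containers sit directly on the provider's segment, so no forwarding or NDP proxying on the host should be required.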

Again, this is just a best guess based on the data you provided. Without knowing your provider's actual configuration, it's impossible to tell for sure.

  • Even after all these years I really like your answer! I never solved it and moved on but this certainly clarifies stuff! – harm Jan 14 '20 at 08:46
  • Thanks for the response. I knew it was an old question, but it seemed likely to still be relevant to some, even if not the original questioner. – Owen DeLong Jan 15 '20 at 13:59