
I have a strange problem after installing a Kubernetes cluster with Kubespray (2.19.1) on Oracle Linux 8 (latest packages and Vagrant image).

I use this Vagrantfile (it got a bit ugly over time from playing around with different options).

VAGRANTFILE_API_VERSION = "2"

v4prefix = "192.168.59."
v6prefix = "fddd:0:0:1ff::"
$num_instances = 6
$vm_memory ||= 4096
$vm_cpus ||= 2

Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.vm.box = "generic/oracle8"
  config.ssh.insert_key = false
  
  (1..$num_instances).each do |i|

    config.vm.define vm_name = "k8s-%01d" % [i] do |node|
        node.vm.hostname = vm_name

        node.vm.provider :virtualbox do |vb|
            vb.memory = $vm_memory
            vb.cpus = $vm_cpus
            vb.gui = false
            vb.linked_clone = true
            vb.customize ["modifyvm", :id, "--vram", "8"]
            vb.customize ["modifyvm", :id, "--audio", "none"]
        end

        ip = "#{v4prefix}#{i+100}"
        ip6 = "#{v6prefix}#{i+100}"

        node.vm.network "private_network", ip: ip, auto_config: false

        node.vm.provision "shell", run: "always", inline: "iptables -P FORWARD ACCEPT"
        node.vm.provision "shell", run: "always", inline: "systemctl stop firewalld; systemctl disable firewalld"
        node.vm.provision "shell", run: "always", inline: "sudo sysctl -w net.ipv6.conf.all.disable_ipv6=0"
        node.vm.provision "shell", run: "always", inline: "sudo sysctl -w net.ipv6.conf.eth1.disable_ipv6=0"
        node.vm.provision "shell", run: "always", inline: "echo 0 > /proc/sys/net/ipv6/conf/eth1/disable_ipv6"
       
        $ip6script = <<-SCRIPT

        cat << 'EOF' > /etc/sysconfig/network-scripts/ifcfg-eth1
NM_CONTROLLED=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=#{ip}
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_FAILURE_FATAL=no
IPV6ADDR=#{ip6}
EOF

        systemctl restart NetworkManager.service
        nmcli device down eth1
        nmcli device up eth1
        SCRIPT

        node.vm.provision "shell", run: "always", inline: $ip6script
    end
  end
end

When the VM starts, the generated file for eth1 looks like this:

NM_CONTROLLED=yes
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.59.101
NETMASK=255.255.255.0
DEVICE=eth1
PEERDNS=no
IPV6INIT=yes
IPV6_AUTOCONF=no
IPV6_FAILURE_FATAL=no
IPV6ADDR=fddd:0:0:1ff::101

As soon as I log in, I can see both the IPv4 and the IPv6 address with ifconfig.

During the Kubespray installation, the IPv6 address disappears, comes back after a NetworkManager reload step, and then disappears again later in the run.

I can also reproduce this without Kubespray: a plain yum update makes the IPv6 address disappear until I restart NetworkManager. Likewise, after the Kubespray installation, restarting NetworkManager manually brings the IPv6 address back, assigned and reachable.

With the IPv4 address, I have no problems.
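To narrow down when and why the address vanishes, a diagnostic sketch like the following may help (commands assumed available on Oracle Linux 8; run on the node right after the address disappears):

```shell
# Current IPv6 state of eth1 as the kernel sees it
ip -6 addr show dev eth1

# What NetworkManager thinks eth1 has (IP6 section only)
nmcli -f IP6 device show eth1

# Recent NetworkManager log lines mentioning DAD or IPv6
journalctl -u NetworkManager --since "-10 min" | grep -i -e dad -e ipv6
```

Comparing the kernel view with the NetworkManager view at the moment of failure should show whether NetworkManager dropped the address itself or merely reacted to a kernel-side Duplicate Address Detection failure.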

I see this message in the logs:

Dec 28 20:35:14 k8s-1 NetworkManager[32965]: <warn>  [1672259714.9875] ipv6ll[9ab386e7ef05cb1c,ifindex=3]: changed: no IPv6 link local address to retry after Duplicate Address Detection failures (back off)
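The warning suggests Duplicate Address Detection (DAD) is failing on eth1 and NetworkManager backs off instead of re-adding the address. A hedged workaround sketch, assuming DAD failures are really the root cause (not confirmed), is to relax DAD for eth1 via sysctl:

```shell
# /etc/sysctl.d/90-ipv6-dad.conf -- workaround sketch, NOT a confirmed fix.
# accept_dad=0 disables Duplicate Address Detection on eth1;
# dad_transmits=0 stops sending DAD neighbor solicitations.
net.ipv6.conf.eth1.accept_dad = 0
net.ipv6.conf.eth1.dad_transmits = 0
```

NetworkManager also exposes a related per-connection property (`ipv6.dad-timeout`) that could be set with `nmcli connection modify`; whether that helps here would need testing.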

Any idea where I should look? Since the scope is large, I will add further information based on your input.
