
Hi, I have two virtual machines on a local server running Ubuntu 20.04, and I want to build a small cluster for my microservices. I ran the following steps to set up the cluster, but I have an issue with the calico-node pods: they are stuck at 0/1 READY.

master.domain.com

  • Ubuntu 20.04
  • docker --version = Docker version 20.10.7, build f0df350
  • kubectl version = Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

worker.domain.com

  • Ubuntu 20.04
  • docker --version = Docker version 20.10.2, build 20.10.2-0ubuntu1~20.04.2
  • kubectl version = Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

STEP-1

In the master.domain.com virtual machine I ran the following commands:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7f4f5bf95d-gnll8   1/1     Running   0          38s     192.168.29.195   master   <none>           <none>
kube-system   calico-node-7zmtm                          1/1     Running   0          38s     195.251.3.255    master   <none>           <none>
kube-system   coredns-74ff55c5b-ltn9g                    1/1     Running   0          3m49s   192.168.29.193   master   <none>           <none>
kube-system   coredns-74ff55c5b-nkhzf                    1/1     Running   0          3m49s   192.168.29.194   master   <none>           <none>
kube-system   etcd-kubem                                 1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
kube-system   kube-apiserver-kubem                       1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
kube-system   kube-controller-manager-kubem              1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>
kube-system   kube-proxy-2cr2x                           1/1     Running   0          3m49s   195.251.3.255    master   <none>           <none>
kube-system   kube-scheduler-kubem                       1/1     Running   0          4m6s    195.251.3.255    master   <none>           <none>

STEP-2 In the worker.domain.com virtual machine I ran the following command:

sudo kubeadm join 195.251.3.255:6443 --token azuist.xxxxxxxxxxx  --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxx
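To verify the join from the master (a suggested check, not one of my original steps), the node list can be printed with:

kubectl get nodes -o wide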

STEP-3 In the master.domain.com virtual machine I ran the following commands:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-7f4f5bf95d-gnll8   1/1     Running   0          6m37s   192.168.29.195   master   <none>           <none>
kube-system   calico-node-7zmtm                          0/1     Running   0          6m37s   195.251.3.255    master   <none>           <none>
kube-system   calico-node-wccnb                          0/1     Running   0          2m19s   195.251.3.230    worker   <none>           <none>
kube-system   coredns-74ff55c5b-ltn9g                    1/1     Running   0          9m48s   192.168.29.193   master   <none>           <none>
kube-system   coredns-74ff55c5b-nkhzf                    1/1     Running   0          9m48s   192.168.29.194   master   <none>           <none>
kube-system   etcd-kubem                                 1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
kube-system   kube-apiserver-kubem                       1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
kube-system   kube-controller-manager-kubem              1/1     Running   0          10m     195.251.3.255    master   <none>           <none>
kube-system   kube-proxy-2cr2x                           1/1     Running   0          9m48s   195.251.3.255    master   <none>           <none>
kube-system   kube-proxy-kxw4m                           1/1     Running   0          2m19s   195.251.3.230    worker   <none>           <none>
kube-system   kube-scheduler-kubem                       1/1     Running   0          10m     195.251.3.255    master   <none>           <none>

kubectl logs -n kube-system calico-node-7zmtm
...
...
2021-06-20 17:10:25.064 [INFO][56] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface eth0: 195.251.3.255/24
2021-06-20 17:10:34.862 [INFO][53] felix/summary.go 100: Summarising 11 dataplane reconciliation loops over 1m3.5s: avg=4ms longest=13ms ()
kubectl logs -n kube-system calico-node-wccnb
...
...
2021-06-20 17:10:59.818 [INFO][55] felix/summary.go 100: Summarising 8 dataplane reconciliation loops over 1m3.6s: avg=3ms longest=13ms (resync-filter-v4,resync-nat-v4,resync-raw-v4)
2021-06-20 17:11:05.994 [INFO][51] monitor-addresses/startup.go 774: Using autodetected IPv4 address on interface br-9a88318dda68: 172.21.0.1/16
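The logs above don't show the readiness failure itself. One way to surface it (a suggestion, not output from my run) is to check the pod events, where the failing readiness probe for calico-node is reported:

kubectl describe pod -n kube-system calico-node-7zmtm
kubectl describe pod -n kube-system calico-node-wccnb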

As you can see, both calico-node pods show 0/1 READY. Why?

Any idea how to solve this problem?

Thank you

ki_ha1984
  • Hi, could you tell us more about your setup (network-wise) and the exact steps that led you to this outcome? Have you followed the [kubeadm prerequisites](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)? I'd reckon the issue could be related to the IP addresses: your master node's address, according to CIDR notation, should be reserved as the broadcast address in this particular subnet. – Dawid Kruk Jun 21 '21 at 15:24
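To illustrate the point about the broadcast address (my own addition, using the address from the question): in the 195.251.3.0/24 subnet the highest address, 195.251.3.255, is the broadcast address, so using it as a node IP can confuse components that expect a regular unicast host address. A quick check, assuming python3 is available:

python3 -c "import ipaddress; print(ipaddress.ip_interface('195.251.3.255/24').network.broadcast_address)"
# prints 195.251.3.255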

2 Answers


I hit exactly the same issue.

  • CentOS 8
  • kubectl kubeadm kubelet v1.22.3
  • docker-ce version 20.10.9

The only difference worth mentioning is that I had to comment out the line

- --port=0

in /etc/kubernetes/manifests/kube-scheduler.yaml, otherwise the scheduler is reported as unhealthy by

kubectl get componentstatuses
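For clarity, the edit looks roughly like this (a trimmed sketch of a kubeadm-generated kube-scheduler.yaml, not my exact file):

apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    # - --port=0   # commented out so the scheduler serves its health endpoint and componentstatuses reports Healthy
    ...

The kubelet watches the static Pod manifests, so it picks up the change and restarts the scheduler automatically.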

The Kubernetes API is advertised on a public IP address. In the kubectl output below, the public IP address of the control-plane node is substituted with 42.42.42.42, the public IP address of the worker node with 21.21.21.21, and the public domain name (which is also the hostname of the control-plane node) with public-domain.work.

>kubectl get pods -n kube-system -o wide

NAME                                           READY   STATUS    RESTARTS      AGE   IP                NODE                  NOMINATED NODE   READINESS GATES
calico-kube-controllers-5d995d45d6-rk9cq       1/1     Running   0             76m   192.168.231.193   public-domain.work         <none>           <none>
calico-node-qstxm                              0/1     Running   0             76m   42.42.42.42       public-domain.work         <none>           <none>
calico-node-zmz5s                              0/1     Running   0             75m   21.21.21.21       node1.public-domain.work   <none>           <none>
coredns-78fcd69978-5xsb2                       1/1     Running   0             81m   192.168.231.194   public-domain.work         <none>           <none>
coredns-78fcd69978-q29fn                       1/1     Running   0             81m   192.168.231.195   public-domain.work         <none>           <none>
etcd-public-domain.work                        1/1     Running   3             82m   42.42.42.42       public-domain.work         <none>           <none>
kube-apiserver-public-domain.work              1/1     Running   3             82m   42.42.42.42       public-domain.work         <none>           <none>
kube-controller-manager-public-domain.work     1/1     Running   2             82m   42.42.42.42       public-domain.work         <none>           <none>
kube-proxy-5kkks                               1/1     Running   0             81m   42.42.42.42       public-domain.work         <none>           <none>
kube-proxy-xsc66                               1/1     Running   0             75m   21.21.21.21       node1.public-domain.work   <none>           <none>
kube-scheduler-public-domain.work              1/1     Running   1 (78m ago)   78m   42.42.42.42       public-domain.work         <none>           <none>
>kubectl get nodes -o wide

NAME                       STATUS   ROLES                  AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE          KERNEL-VERSION   CONTAINER-RUNTIME
public-domain.work         Ready    control-plane,master   4h56m   v1.22.3   42.42.42.42      <none>        CentOS Stream 8   4.18.0-348.el8.x86_64   docker://20.10.9
node1.public-domain.work   Ready    <none>                 4h50m   v1.22.3   21.21.21.21      <none>        CentOS Stream 8   4.18.0-348.el8.x86_64   docker://20.10.10
>kubectl logs -n kube-system calico-node-qstxm

2021-11-09 15:27:38.996 [INFO][86] felix/int_dataplane.go 1539: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:27:38.996 [INFO][86] felix/hostip_mgr.go 85: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:27:38.997 [INFO][86] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-11-09 15:27:38.998 [INFO][86] felix/ipsets.go 785: Doing full IP set rewrite family="inet" numMembersInPendingReplace=7 setID="this-host"

2021-11-09 15:27:40.198 [INFO][86] felix/iface_monitor.go 201: Netlink address update. addr="here:is:some:ipv6:address:that:has:nothing:to:do:with:my:control:panel:server:public:ipv6" exists=true ifIndex=3
2021-11-09 15:27:40.198 [INFO][86] felix/int_dataplane.go 1071: Linux interface addrs changed. addrs=set.mapSet{"fe80::9132:a0df:82d8:e26c":set.empty{}} ifaceName="eth1"
2021-11-09 15:27:40.198 [INFO][86] felix/int_dataplane.go 1539: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{"here:is:some:ipv6:address:that:has:nothing:to:do:with:my:control:panel:server:public:ipv6":set.empty{}}}
2021-11-09 15:27:40.199 [INFO][86] felix/hostip_mgr.go 85: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{"here:is:some:ipv6:address:that:has:nothing:to:do:with:my:control:panel:server:public:ipv6":set.empty{}}}
2021-11-09 15:27:40.199 [INFO][86] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-11-09 15:27:40.200 [INFO][86] felix/ipsets.go 785: Doing full IP set rewrite family="inet" numMembersInPendingReplace=7 setID="this-host"

2021-11-09 15:27:48.010 [INFO][81] monitor-addresses/startup.go 713: Using autodetected IPv4 address on interface eth0: 42.42.42.42/24
>kubectl logs -n kube-system calico-node-zmz5s

2021-11-09 15:25:56.669 [INFO][64] felix/int_dataplane.go 1071: Linux interface addrs changed. addrs=set.mapSet{} ifaceName="eth1"
2021-11-09 15:25:56.669 [INFO][64] felix/int_dataplane.go 1539: Received interface addresses update msg=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:25:56.669 [INFO][64] felix/hostip_mgr.go 85: Interface addrs changed. update=&intdataplane.ifaceAddrsUpdate{Name:"eth1", Addrs:set.mapSet{}}
2021-11-09 15:25:56.669 [INFO][64] felix/ipsets.go 130: Queueing IP set for creation family="inet" setID="this-host" setType="hash:ip"
2021-11-09 15:25:56.670 [INFO][64] felix/ipsets.go 785: Doing full IP set rewrite family="inet" numMembersInPendingReplace=7 setID="this-host"

2021-11-09 15:25:56.769 [INFO][64] felix/iface_monitor.go 201: Netlink address update. addr="here:is:some:ipv6:address:that:has:nothing:to:do:with:my:worknode:server:public:ipv6" exists=false ifIndex=3

2021-11-09 15:26:07.050 [INFO][64] felix/summary.go 100: Summarising 14 dataplane reconciliation loops over 1m1.7s: avg=5ms longest=11ms ()
2021-11-09 15:26:33.880 [INFO][59] monitor-addresses/startup.go 713: Using autodetected IPv4 address on interface eth0: 21.21.21.21/24
registered
  • This does not really answer the question. If you have a different question, you can ask it by clicking [Ask Question](https://stackoverflow.com/questions/ask). To get notified when this question gets new answers, you can [follow this question](https://meta.stackexchange.com/q/345661). Once you have enough [reputation](https://stackoverflow.com/help/whats-reputation), you can also [add a bounty](https://stackoverflow.com/help/privileges/set-bounties) to draw more attention to this question. - [From Review](/review/late-answers/30298071) – mbuechmann Nov 10 '21 at 14:02

The issue turned out to be the BGP port being blocked by the firewall.

These commands on the master node solved it for me:

>firewall-cmd --add-port 179/tcp --zone=public --permanent

>firewall-cmd --reload
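Calico's BGP peering uses TCP port 179, so the port has to be reachable between all nodes, not only on the master. For the Ubuntu machines in the original question (assuming ufw is the active firewall there, which is my assumption; the commands above are for firewalld on CentOS), the equivalent would be:

sudo ufw allow 179/tcp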
registered