
How can I confirm whether the pods in this Kubernetes cluster are running inside the Calico overlay network?


Pod Names:

Specifically, when I run `kubectl get pods --all-namespaces`, only two of the pods in the resulting list have the word calico in their names. The other pods, such as etcd and kube-controller-manager, do NOT have calico in their names. From what I read online, the other pods should also have calico in their names.

$ kubectl get pods --all-namespaces  

NAMESPACE     NAME                                                               READY   STATUS              RESTARTS   AGE  
kube-system   calico-node-l6jd2                                                  1/2     Running             0          51m  
kube-system   calico-node-wvtzf                                                  1/2     Running             0          51m  
kube-system   coredns-86c58d9df4-44mpn                                           0/1     ContainerCreating   0          40m  
kube-system   coredns-86c58d9df4-j5h7k                                           0/1     ContainerCreating   0          40m  
kube-system   etcd-ip-10-0-0-128.us-west-2.compute.internal                      1/1     Running             0          50m  
kube-system   kube-apiserver-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m  
kube-system   kube-controller-manager-ip-10-0-0-128.us-west-2.compute.internal   1/1     Running             0          51m  
kube-system   kube-proxy-dqmb5                                                   1/1     Running             0          51m  
kube-system   kube-proxy-jk7tl                                                   1/1     Running             0          51m  
kube-system   kube-scheduler-ip-10-0-0-128.us-west-2.compute.internal            1/1     Running             0          51m  
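A quick way to pull the stuck pods out of a listing like this is to compare the two numbers in the READY column. This is only a sketch over a pasted sample of the output above; against a live cluster you would pipe `kubectl get pods --all-namespaces` straight into the awk filter instead of the here-doc.

```shell
# Flag pods whose READY column "x/y" has x != y, i.e. not all of the
# pod's containers are up. The here-doc is a trimmed sample of the
# listing above; on a real cluster, replace it with:
#   kubectl get pods --all-namespaces | awk 'NR > 1 { ... }'
awk 'NR > 1 { split($3, r, "/"); if (r[1] != r[2]) print $2, $3, $4 }' <<'EOF'
NAMESPACE     NAME                       READY   STATUS              RESTARTS   AGE
kube-system   calico-node-l6jd2          1/2     Running             0          51m
kube-system   coredns-86c58d9df4-44mpn   0/1     ContainerCreating   0          40m
kube-system   etcd-ip-10-0-0-128         1/1     Running             0          50m
EOF
# prints:
# calico-node-l6jd2 1/2 Running
# coredns-86c58d9df4-44mpn 0/1 ContainerCreating
```

For the pods it flags, `kubectl -n kube-system describe pod <name>` will usually show the event (often a CNI/networking error) that is keeping them in ContainerCreating.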


Stdout from applying Calico:

The stdout from applying Calico was as follows:

$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml  

configmap/calico-config created  
service/calico-typha created  
deployment.apps/calico-typha created  
poddisruptionbudget.policy/calico-typha created  
daemonset.extensions/calico-node created  
serviceaccount/calico-node created  
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created  
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created  


How the cluster was created:

The commands that installed the cluster are:

$ sudo -i 
# kubeadm init --kubernetes-version 1.13.1 --pod-network-cidr 192.168.0.0/16 | tee kubeadm-init.out
# exit 
$ sudo mkdir -p $HOME/.kube
$ sudo chown -R lnxcfg:lnxcfg /etc/kubernetes
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config 
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config 
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
$ sudo kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml  

This is running on AWS in Amazon Linux 2 host machines.
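One thing worth double-checking in the sequence above: the v3.3 calico.yaml manifest used here defaults its IP pool (CALICO_IPV4POOL_CIDR) to 192.168.0.0/16, so the --pod-network-cidr passed to kubeadm init must match it. A minimal sketch that pulls the CIDR back out of the recorded init command (the command string is the one from this question):

```shell
# Extract the pod network CIDR from the recorded kubeadm init command
# and compare it against the 192.168.0.0/16 pool that the v3.3
# calico.yaml manifest defaults to.
init_cmd='kubeadm init --kubernetes-version 1.13.1 --pod-network-cidr 192.168.0.0/16'
cidr=$(printf '%s\n' "$init_cmd" | sed -n 's/.*--pod-network-cidr \([^ ]*\).*/\1/p')

if [ "$cidr" = "192.168.0.0/16" ]; then
    echo "pod CIDR matches Calico's default pool"
else
    echo "mismatch: kubeadm used $cidr" >&2
fi
```

If the two did not match, calico-node would still start but pods would receive IPs outside the range the cluster was initialized with, which is a common source of CNI trouble.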

  • Note that ContainerCreating persists in the OP even after the pods are deleted. Google research indicates that this is often due to Calico installation problems, to which this OP also points. – CodeMed Mar 12 '19 at 03:28

3 Answers


As per the official docs (https://docs.projectcalico.org/v3.6/getting-started/kubernetes/), your output looks fine. The docs contain further commands to activate Calico, and the demo on the front page shows some verification steps:

NAMESPACE    NAME                                       READY  STATUS   RESTARTS  AGE
kube-system  calico-kube-controllers-6ff88bf6d4-tgtzb   1/1    Running  0         2m45s
kube-system  calico-node-24h85                          2/2    Running  0         2m43s
kube-system  coredns-846jhw23g9-9af73                   1/1    Running  0         4m5s
kube-system  coredns-846jhw23g9-hmswk                   1/1    Running  0         4m5s
kube-system  etcd-jbaker-1                              1/1    Running  0         6m22s
kube-system  kube-apiserver-jbaker-1                    1/1    Running  0         6m12s
kube-system  kube-controller-manager-jbaker-1           1/1    Running  0         6m16s
kube-system  kube-proxy-8fzp2                           1/1    Running  0         5m16s
kube-system  kube-scheduler-jbaker-1                    1/1    Running  0         5m41s
  • Note that `ContainerCreating` persists in the OP even after the pods are deleted. – CodeMed Mar 12 '19 at 01:38
  • But as far as I know, coredns isn't part of Calico. Can you undo the Calico install and check the status before you start? – jcuypers Mar 12 '19 at 07:54

Could you please let me know where you found the literature mentioning that the other pods would also have calico in their names?

As far as I know, in the kube-system namespace, the scheduler, API server, controller manager, and proxy are provided by Kubernetes itself, hence their names don't include calico.

And one more thing: Calico applies to the pods you create for the actual applications you wish to run on k8s, not to the Kubernetes control plane.

Are you facing any problems with cluster creation? If so, that would be a different question.

Hope this helps.

  • I found it in the CNCF's official training course for the Certified Kubernetes Administrator, which seemed official to me. – CodeMed Mar 12 '19 at 03:28

This is normal and expected behavior: only a few pods have calico in their names. They are created when you initialize Calico or add new nodes to your cluster.

etcd-*, kube-apiserver-*, kube-controller-manager-*, coredns-*, kube-proxy-*, and kube-scheduler-* are mandatory system components; these pods have no dependency on Calico, hence their system-based names.

Also, as @Jonathan_M already wrote, Calico doesn't apply to the K8s control plane, only to newly created pods.

You can verify whether your pods are inside the network overlay by using kubectl get pods --all-namespaces -o wide

My example:

kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
default       my-nginx-76bf4969df-4fwgt               1/1     Running   0          14s   192.168.1.3   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-h9w9p               1/1     Running   0          14s   192.168.1.5   kube-calico-2   <none>           <none>
default       my-nginx-76bf4969df-mh46v               1/1     Running   0          14s   192.168.1.4   kube-calico-2   <none>           <none>
kube-system   calico-node-2b8rx                       2/2     Running   0          70m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   calico-node-q5n2s                       2/2     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   coredns-86c58d9df4-q22lx                1/1     Running   0          74m   192.168.0.2   kube-calico-1   <none>           <none>
kube-system   coredns-86c58d9df4-q8nmt                1/1     Running   0          74m   192.168.1.2   kube-calico-2   <none>           <none>
kube-system   etcd-kube-calico-1                      1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-apiserver-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-controller-manager-kube-calico-1   1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-6zsxc                        1/1     Running   0          74m   10.132.0.12   kube-calico-1   <none>           <none>
kube-system   kube-proxy-97xsf                        1/1     Running   0          60m   10.132.0.13   kube-calico-2   <none>           <none>
kube-system   kube-scheduler-kube-calico-1            1/1     Running   0          73m   10.132.0.12   kube-calico-1   <none>           <none>


kubectl get nodes -o wide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION    CONTAINER-RUNTIME
kube-calico-1   Ready    master   84m   v1.13.4   10.132.0.12   <none>        Ubuntu 16.04.5 LTS   4.15.0-1023-gcp   docker://18.9.2
kube-calico-2   Ready    <none>   70m   v1.13.4   10.132.0.13   <none>        Ubuntu 16.04.6 LTS   4.15.0-1023-gcp   docker://18.9.2

You can see that the K8s control plane pods use the nodes' own IPs, while the nginx deployment pods already use the Calico 192.168.0.0/16 range.
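The same check can be scripted: any pod whose IP falls inside the 192.168.0.0/16 pool is on the Calico overlay, while control-plane pods keep the node IPs. This is only a sketch over a pasted sample of the -o wide listing above; on a live cluster, pipe the real kubectl output into the filter instead of the here-doc.

```shell
# Print pods whose IP (7th column of "kubectl get pods -o wide" output)
# lies in the 192.168.0.0/16 pod CIDR; adjust the prefix if your pool
# differs. On a real cluster:
#   kubectl get pods --all-namespaces -o wide | awk 'NR > 1 && ...'
awk 'NR > 1 && $7 ~ /^192\.168\./ { print $2, $7 }' <<'EOF'
NAMESPACE     NAME                        READY  STATUS   RESTARTS  AGE  IP            NODE
default       my-nginx-76bf4969df-4fwgt   1/1    Running  0         14s  192.168.1.3   kube-calico-2
kube-system   etcd-kube-calico-1          1/1    Running  0         73m  10.132.0.12   kube-calico-1
kube-system   coredns-86c58d9df4-q22lx    1/1    Running  0         74m  192.168.0.2   kube-calico-1
EOF
# prints:
# my-nginx-76bf4969df-4fwgt 192.168.1.3
# coredns-86c58d9df4-q22lx 192.168.0.2
```

Note that CoreDNS shows up in the overlay range too, which matches the listing above: it is a regular (non-host-network) pod, so it gets a Calico-assigned IP even though it has no calico in its name.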
