
I set up two stateful pods running on two different worker nodes, and I am unable to ping between the pods. Here is the IP pool file:

apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: rack.ippool-1
spec:
  cidr: 192.168.16.0/24 
  blockSize: 24  
  ipipMode: Never
  natOutgoing: true
  disabled: false
  nodeSelector: all()

IP config on the first pod

ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 3e:a6:cb:15:cf:1a brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.41/32 brd 192.168.16.41 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::3ca6:cbff:fe15:cf1a/64 scope link 
       valid_lft forever preferred_lft forever

IP config on the second pod (on the other node)

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 1a:3c:c1:1a:fa:03 brd ff:ff:ff:ff:ff:ff
    inet 192.168.16.42/32 brd 192.168.16.42 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::183c:c1ff:fe1a:fa03/64 scope link 
       valid_lft forever preferred_lft forever

Ping Status

ping 192.168.16.41
PING 192.168.16.41 (192.168.16.41) 56(84) bytes of data.

The ping just hangs with no replies.

I tried ipipMode: Always and CrossSubnet, but neither worked. I am not sure what I am missing. Also, since I set blockSize: 24, why are the pod IPs shown with a /32 CIDR? Aren't they in the range of the /24 CIDR?
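For what it's worth, the /32 question can be checked independently of the cluster. A short sketch using Python's standard ipaddress module (the addresses are taken from the output above): Calico always assigns pod interfaces a /32 and reaches them via host routes, so the interface mask is unrelated to blockSize, which only controls how the pool is carved into IPAM affinity blocks.

```python
import ipaddress

# Values taken from the IPPool manifest and `ip addr` output above.
pod_addr = ipaddress.ip_interface("192.168.16.41/32")
pool = ipaddress.ip_network("192.168.16.0/24")

# The /32 is only the interface netmask: Calico gives every pod a /32
# and routes to it with host routes instead of an on-link subnet.
print(pod_addr.network)     # 192.168.16.41/32

# The address itself is still allocated from the /24 pool.
print(pod_addr.ip in pool)  # True

# blockSize controls IPAM affinity blocks, not the interface mask:
# a /24 pool with blockSize 24 is exactly one block of 256 addresses,
# which matches the single block in `calicoctl ipam show --show-blocks`.
blocks = list(pool.subnets(new_prefix=24))
print(len(blocks), blocks[0].num_addresses)  # 1 256
```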

[root@k8master-1 ~]# calicoctl node status
Calico process is running.

None of the BGP backend processes (BIRD or GoBGP) are running.

Calico IPAM result

calicoctl ipam show
+----------+-----------------+-----------+------------+--------------+
| GROUPING |      CIDR       | IPS TOTAL | IPS IN USE |   IPS FREE   |
+----------+-----------------+-----------+------------+--------------+
| IP Pool  | 10.244.0.0/16   |     65536 | 1 (0%)     | 65535 (100%) |
| IP Pool  | 192.168.16.0/24 |       256 | 3 (1%)     | 253 (99%)    |
+----------+-----------------+-----------+------------+--------------+

Calico IPAM blocks

[root@k8master-1 ~]# calicoctl ipam show --show-blocks
+----------+-----------------+-----------+------------+--------------+
| GROUPING |      CIDR       | IPS TOTAL | IPS IN USE |   IPS FREE   |
+----------+-----------------+-----------+------------+--------------+
| IP Pool  | 10.244.0.0/16   |     65536 | 1 (0%)     | 65535 (100%) |
| Block    | 10.244.0.0/26   |        64 | 1 (2%)     | 63 (98%)     |
| IP Pool  | 192.168.16.0/24 |       256 | 3 (1%)     | 253 (99%)    |
| Block    | 192.168.16.0/24 |       256 | 3 (1%)     | 253 (99%)    |
+----------+-----------------+-----------+------------+--------------+

Calico's borrowed IP list

[root@k8master-1 ~]# calicoctl ipam show --show-borrowed
+---------------+----------------+-----------------+-------------+------+--------------------+
|      IP       | BORROWING-NODE |      BLOCK      | BLOCK OWNER | TYPE |    ALLOCATED-TO    |
+---------------+----------------+-----------------+-------------+------+--------------------+
| 192.168.16.39 | k8worker-2     | 192.168.16.0/24 |             | pod  | default/racnode1-0 |
| 192.168.16.41 | k8worker-2     | 192.168.16.0/24 |             | pod  | default/racnode1-0 |
| 192.168.16.42 | k8worker-1     | 192.168.16.0/24 |             | pod  | default/racnode2-0 |
+---------------+----------------+-----------------+-------------+------+--------------------+
  • Where are you trying to ping from? Your machine, another pod, master node, etc? – Brian Pursley Jul 12 '20 at 23:54
  • Hi, I am trying to ping pod to pod at this moment. However, it would be great to know if we can ping the pods from the subnet of the master and worker node CIDR, which is 10.0.1.0/24. – drifter Jul 13 '20 at 02:47
  • No, you can't ping from a node to a pod, that's why there are k8s services – Rico Jul 13 '20 at 05:22
  • What's the IP address range for your K8s nodes? – Rico Jul 13 '20 at 05:26
  • @Rico, thanks for the input. As I mentioned, I am looking for a way to ping pod to pod when both pods are on the same subnet. In my case both pods are created on the Calico network but on different worker nodes. The pod subnet is 192.168.16.0/24. – drifter Jul 13 '20 at 18:34
  • Any help on this? – drifter Jul 15 '20 at 04:28

1 Answer


I am also having a similar issue. I created two pods, nginx and busybox, in the same namespace, but I could not ping the nginx pod from busybox with the Calico network plugin. If I expose nginx as a NodePort, I can connect to the pod from the node where the nginx pod is running; if I try to hit the NodePort from another node in the cluster, it does not work. I am still trying to find out why, but have no clue as of now.
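For reference, exposing nginx as a NodePort might look like the sketch below. This is an assumption, not the poster's actual manifest; the service name, label selector, and ports are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: nginx           # assumes the nginx pod carries this label
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080    # must fall in the NodePort range (default 30000-32767)
```

A NodePort that only works from the node hosting the pod can be consistent with the same underlying routing problem: kube-proxy DNATs the connection to the pod IP, so traffic arriving on any other node still needs a working cross-node route into the pod network.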