
I have a K8s cluster running on CentOS 8 with the following nodes:

node1: 129.X.X.193 (master1)
node2: 129.X.X.195 (master2)
node3: 129.X.X.194 (worker)

I installed Kubernetes using Kubespray (v2.15.1) with mostly default settings, and everything installed fine - here are the pods:

$ kubectl get pods -n kube-system
calico-kube-controllers-75f8564897-f9j48   1/1     Running   0          141m
calico-node-cx4js                          1/1     Running   0          141m
calico-node-gtcxm                          1/1     Running   0          141m
calico-node-pw9kt                          1/1     Running   0          141m
coredns-7677f9bb54-qx5w8                   1/1     Running   0          140m
coredns-7677f9bb54-ttjkm                   1/1     Running   0          140m
dns-autoscaler-6b849f6697-85v55            1/1     Running   0          140m
kube-apiserver-node1                       1/1     Running   0          144m
kube-apiserver-node2                       1/1     Running   0          143m
kube-controller-manager-node1              1/1     Running   0          144m
kube-controller-manager-node2              1/1     Running   0          143m
kube-proxy-7mjdk                           1/1     Running   0          142m
kube-proxy-dr2x2                           1/1     Running   0          142m
kube-proxy-qrpx9                           1/1     Running   0          142m
kube-scheduler-node1                       1/1     Running   0          144m
kube-scheduler-node2                       1/1     Running   0          143m
nginx-proxy-node3                          1/1     Running   0          142m
nodelocaldns-hxjmt                         1/1     Running   0          140m
nodelocaldns-jctwx                         1/1     Running   0          140m
nodelocaldns-khdsl                         1/1     Running   0          140m

Now I want to check the network plugin using a sample nginx deployment:

$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
$ kubectl expose deployment/nginx --type="NodePort" --port=8080 --target-port=80 --overrides '{ "spec":{"ports":[{"port":8080,"targetPort":80,"protocol":"TCP","nodePort":31704}]}}'
service/nginx exposed
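For reference, this is the manifest the expose command should produce, as far as I understand it (a sketch; the app: nginx selector label is the one kubectl create deployment adds by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx        # label set by `kubectl create deployment nginx`
  ports:
    - port: 8080      # ClusterIP port
      targetPort: 80  # container port inside the pod
      protocol: TCP
      nodePort: 31704 # should be opened on every node
</imports>
```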

Both the pod and the service look fine:

$ kubectl get svc -n kube-system
NAME      TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
coredns   ClusterIP   10.X.X.3     <none>        53/UDP,53/TCP,9153/TCP   142m
nginx     NodePort    10.X.X.22   <none>        8080:31704/TCP           29m

From my understanding, I should now be able to reach the service on that NodePort from all 3 nodes; however, it only works on 2 of the 3:

Node1 and Node3 (via hostname as well as IP) return the page as expected:

$ curl node1:31704
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

However, when I use node2, the request just hangs:

$ curl -v node2:31704
* Rebuilt URL to: node2:31704/
*   Trying 129.X.X.195...
* TCP_NODELAY set

Accessing the service via its internal ClusterIP also works fine (curl 10.X.X.22:8080).

Any idea what I'm doing wrong or any further information needed? Thanks!
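I can also run further diagnostics on node2 if that helps - something along these lines (a sketch, assuming kube-proxy runs in iptables mode; the pod name is taken from the listing above):

```shell
# Is the NodePort rule programmed on node2? (iptables mode assumed)
sudo iptables-save | grep 31704

# Does the NodePort answer locally on node2 itself?
curl --max-time 5 http://127.0.0.1:31704

# Any errors in the kube-proxy instance on that node?
kubectl -n kube-system logs kube-proxy-7mjdk --tail=20

# Does the service actually have pod endpoints behind it?
kubectl get endpoints nginx
```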
