
I built a simple Kubernetes setup on bare metal, with 1 master and 2 worker nodes:

[root@kubemaster helm-chart]$ kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
kubemaster   Ready    control-plane   53d   v1.26.1
kubenode-1   Ready    <none>          53d   v1.26.1
kubenode-2   Ready    <none>          17d   v1.26.2

All of these nodes run CentOS 8, and I use the firewalld service on the host machines to apply firewall rules. I am seeing some very strange behaviour here.

First of all, I have Nginx ingress controller pods running on both worker nodes. I also have an echo service running on kubenode-2. I added an Ingress that routes one specific path to the echo service.
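For reference, the Ingress looks roughly like this; the service name (echo), the path (/echo), and the port (80) are illustrative placeholders rather than my exact values:

# Illustrative sketch: apply an Ingress that routes /echo to the
# echo service (names, path, and port are placeholders).
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /echo
        pathType: Prefix
        backend:
          service:
            name: echo
            port:
              number: 80
EOF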

If I call the ingress controller on kubenode-2, I always get the expected answer from the echo service. But if I call the ingress controller on kubenode-1, the abnormalities take over. :-) Specifically:

  • By default, it didn't work: calls to the ingress controller on kubenode-1 timed out.
  • But when I turned off the firewalld service on kubenode-2, calling the ingress controller on kubenode-1 returned the expected result.
  • Since it is not good to run these machines without a firewall, I always turned it back on. And it kept working: I still got the expected behaviour when calling the ingress controller on kubenode-1.
  • But not forever! After some time (it can be a few hours, but usually about 10 minutes), it stops working and the ingress controller on kubenode-1 times out again. The echo service on kubenode-2 can no longer be reached from the kubenode-1 ingress controller. This is very weird, because I don't touch anything: it stops working without a single bit of configuration changing anywhere. If I repeat the whole cycle (stop firewalld, start firewalld), it works again and I can call the ingress controller on kubenode-1 and get the expected response from the echo service. But only temporarily; after those roughly 10 minutes it stops working once more.

And for a little more twist in the story: reaching the echo service on kubenode-2 from the ingress controller on kubenode-1 never works after a systemctl restart firewalld. To get it working again, I have to run systemctl stop firewalld followed by systemctl start firewalld.
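To illustrate what I check when it breaks, here is a rough sketch (assuming kube-proxy and the CNI plugin program their rules via iptables, which KUBE-prefixed chains would indicate):

# On kubenode-2: count the iptables rules and the Kubernetes-managed
# chains before touching firewalld.
iptables -S | wc -l
iptables -S | grep -c KUBE

# Restart firewalld and run the same two commands again; if the counts
# drop sharply, the restart flushed the rules that kube-proxy/the CNI
# had installed, and they only come back when those components re-sync.
systemctl restart firewalld
iptables -S | wc -l
iptables -S | grep -c KUBE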

If anyone understands this, I would much appreciate it if they could shed some light on this behaviour, or advise me on what I need to change in firewalld so that the echo service is always reachable from kubenode-1.

As additional context, here is the firewalld configuration on kubenode-2:

[root@kubenode-2 ~]$ firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources:
  services: cockpit dhcpv6-client http https ssh
  ports: 10250/tcp 30000-32767/tcp 6783/tcp 10255/tcp 5443/tcp 179/tcp
  protocols:
  forward: no
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

Do I need to allow/open any more ports?
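If more ports are needed, this is how I would open them; the UDP range below is only an example (if the CNI were Weave Net, which the existing 6783/tcp entry suggests, it would also need 6783-6784/udp):

# Example only: permanently open a UDP port range, reload so the
# running configuration picks it up, then verify.
firewall-cmd --permanent --add-port=6783-6784/udp
firewall-cmd --reload
firewall-cmd --list-ports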

If you need any more info, please let me know!
