
I am trying to expose a service in Kubernetes.

When I run the expose command kubectl expose deployment demo-api-app -n default --type=NodePort --name=my-service, everything looks OK. I get endpoints and a NodePort assigned:

[root@mys-servver test_connect]# kubectl get endpoints my-service
NAME         ENDPOINTS          AGE
my-service   10.233.93.1:8081   4m31s
[root@ppfxasmsnd1005 test_connect]# kubectl get svc my-service
NAME         TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
my-service   NodePort   10.233.11.26   <none>        8081:31594/TCP   5m12s
[root@ppfxasmsnd1005 test_connect]#
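
For reference, this is roughly how I double-check what kubectl expose actually generated (a sketch using the object names from the output above):

kubectl get svc my-service -n default -o yaml                                   # check spec.selector, port 8081 and nodePort 31594
kubectl get deployment demo-api-app -n default -o wide                          # pod labels should match the Service selector
kubectl get endpointslice -n default -l kubernetes.io/service-name=my-service   # should list the pod IP 10.233.93.1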

The NodePort should be listening on 31594, but nothing is listening (I tried the following command on all worker and control-plane nodes):

[root@ys-servver test_connect]# netstat -tulpn | grep  31594
[root@ys-servver test_connect]#
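
As far as I understand, with kube-proxy in iptables mode the NodePort is handled by DNAT rules rather than by a process binding the port, so an empty netstat is not conclusive on its own. A sketch of a check against the iptables rules instead (31594 is the port assigned above):

iptables-save | grep 31594                           # the NodePort should appear in the rules kube-proxy programs
iptables -t nat -L KUBE-NODEPORTS -n | grep 31594    # KUBE-NODEPORTS is the nat chain kube-proxy uses for NodePorts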

There is nothing useful in the kube-proxy logs (I tried to increase the log verbosity, but that didn't help):

[root@ys-servver test_connect]# kubectl logs -n kube-system -l "k8s-app=kube-proxy"
I0216 08:15:40.569350       1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=524288
I0216 08:15:40.569546       1 config.go:317] "Starting service config controller"
I0216 08:15:40.569556       1 shared_informer.go:255] Waiting for caches to sync for service config
I0216 08:15:40.569599       1 config.go:226] "Starting endpoint slice config controller"
I0216 08:15:40.569614       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I0216 08:15:40.569614       1 config.go:444] "Starting node config controller"
I0216 08:15:40.569623       1 shared_informer.go:255] Waiting for caches to sync for node config
I0216 08:15:40.670408       1 shared_informer.go:262] Caches are synced for service config
I0216 08:15:40.670446       1 shared_informer.go:262] Caches are synced for endpoint slice config
I0216 08:15:40.670476       1 shared_informer.go:262] Caches are synced for node config
I0216 08:15:40.721944       1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
I0216 08:15:40.722143       1 config.go:226] "Starting endpoint slice config controller"
...
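
For completeness, this is roughly how I tried to raise the kube-proxy verbosity (a sketch; it assumes the standard kube-proxy DaemonSet name in kube-system):

kubectl -n kube-system edit daemonset kube-proxy               # add --v=4 (or higher) to the kube-proxy container args
kubectl -n kube-system rollout restart daemonset kube-proxy    # restart the pods so the new verbosity takes effect
kubectl -n kube-system logs -l "k8s-app=kube-proxy" --tail=50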

I deployed the cluster with Kubespray and kube-proxy is configured to use iptables. The nodes run RedHat 8 with SELinux disabled. What's wrong?
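
To verify that setup I use checks along these lines (a sketch; I assume the kubeadm-style kube-proxy ConfigMap, and as far as I know the kubespray inventory variable is kube_proxy_mode):

getenforce                                                                # should print Disabled (or Permissive)
kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"    # the mode kube-proxy actually runs with (iptables vs ipvs)
# In kubespray the mode is driven by kube_proxy_mode in the group_vars (assumption on my side).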

PS:


[root@servver test_connect]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"ab69524f795c42094a6630298ff53f3c3ebab7f4", GitTreeState:"clean", BuildDate:"2021-12-07T18:16:20Z", GoVersion:"go1.17.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.5", GitCommit:"804d6167111f6858541cef440ccc53887fbbc96a", GitTreeState:"clean", BuildDate:"2022-12-08T10:08:09Z", GoVersion:"go1.19.4", Compiler:"gc", Platform:"linux/amd64"}
WARNING: version difference between client (1.23) and server (1.25) exceeds the supported minor version skew of +/-1
[root@servver test_connect]# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@servver test_connect]#


[root@servver  test_connect]# sysctl -p
net.ipv4.ip_forward = 1
kernel.keys.root_maxbytes = 25000000
kernel.keys.root_maxkeys = 1000000
kernel.panic = 10
kernel.panic_on_oops = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.ipv4.ip_local_reserved_ports = 30000-32767
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@servver  test_connect]#
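
Related sanity checks I run on each node (a sketch; just confirming the modules behind those sysctls are loaded and the values are applied at runtime):

lsmod | grep -E 'br_netfilter|ip_vs'                              # bridge-nf-call-* only takes effect with br_netfilter loaded
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward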

I need the NodePort to be reachable on all nodes.
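
To test that end to end I use something like this (a sketch; it assumes the application answers plain HTTP on the NodePort):

# Try the NodePort on every node's InternalIP; a working NodePort answers on all of them.
for node in $(kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}'); do
  curl -sS --connect-timeout 3 "http://${node}:31594/" >/dev/null && echo "${node}: OK" || echo "${node}: FAILED"
done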

Laurent
  • I solved my issue: you must check in iptables whether the port is correct: -> `iptables-save | grep ` If you have the same issue, don't forget to disable SELinux and make sure the sysctl config is correct. If you are in IPVS mode, try changing to iptables; that worked for me. – Laurent Feb 16 '23 at 09:13
