
I have a simple k8s NodePort Service linked to a k8s Deployment of a single pod hosting a hello-world Go program, which basically uses cobra to spin up a fasthttp server. If the pod of that Deployment restarts or gets deleted (and a new one spins up), the whole service goes down and never comes back up. The pod reports healthy and the service reports healthy, but the load balancer reports no response. If I ssh onto the EC2 node and try to call the NodePort of the service, I also get no response. Basically the entire port just dies and stops responding on the instance. Restarting the node doesn't fix it, and deleting the instance and bringing up a new one doesn't fix it either. I basically need to move the entire service to a new port for it to start working again.
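For reference, here is a minimal sketch of the setup described above. All names, labels, images, and port numbers are placeholders I've filled in for illustration; they are not the actual manifests:

```yaml
# Hypothetical manifests illustrating the setup: a single-replica Deployment
# exposed through a NodePort Service. Names and ports are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: example/hello-world:latest  # Go binary: cobra CLI starting a fasthttp server
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  selector:
    app: hello-world
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080  # the node port that stops responding after the pod restarts
```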

This is with k8s version 1.24.

Does anyone have any ideas why this might be the case? I've never encountered this issue hosting a container built in any other way.

Derick F
  • How many nodes do you have in EKS? Is your pod getting scheduled on the same node each time? Generally, if it gets scheduled on a different node, we have to use that node's IP and port when using a NodePort service. – Harsh Manvar Feb 22 '23 at 04:27
  • Because of this issue, it hasn't made it past our test EKS, which is just a single node. @HarshManvar – Derick F Feb 23 '23 at 08:19

0 Answers