
I have a kubernetes service:

kind: "Service"
apiVersion: "v1"
metadata:
  name: "aggregator"
  labels:
      name: "aggregator"
spec:
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  selector:
    name: "aggregator"
createExternalLoadBalancer: true
sessionAffinity: "ClientIP"

This service worked fine when I had one node and one master, but the moment I increased the number of nodes, some pods in the cluster could no longer connect to this service. When I curl the endpoint shown by kubectl describe services aggregator, I get "No Route to Host".
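
As an aside, in the v1 API sessionAffinity and the external load balancer setting belong under spec (createExternalLoadBalancer is the pre-v1 field; v1 expresses it as type: "LoadBalancer"), so the same service written purely against v1 would look roughly like:

kind: "Service"
apiVersion: "v1"
metadata:
  name: "aggregator"
  labels:
    name: "aggregator"
spec:
  type: "LoadBalancer"
  sessionAffinity: "ClientIP"
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  selector:
    name: "aggregator"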

Christian Grabowski

1 Answer


The issue was the kube-proxy systemd service. I had:

ExecStart=/opt/bin/kube-proxy \
    --master=<MASTER_INTERNAL_IP>:8080 \
    --logtostderr=true

However, the --master flag requires https:// in front of the master's IP address. That raises the question of how the first node worked if it was running the same systemd service, and all nodes are running the same version of Kubernetes.
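
With the scheme added, the relevant part of the unit looks something like this (same binary path and flags as above; the master address is still a placeholder):

ExecStart=/opt/bin/kube-proxy \
    --master=https://<MASTER_INTERNAL_IP>:8080 \
    --logtostderr=true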

Christian Grabowski