I have a StatefulSet of 3 ejabberd replicas. I have exposed them to a GCP NEG using the following declaration:

apiVersion: v1
kind: Service
metadata:
  name: ejabberd
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"5222":{"name": "ejabberd-xmpp-production-neg"}, "5443":{"name": "ejabberd-http-production-neg"}}}'
spec:
  type: ClusterIP
  clusterIP: None
  selector:
    app: ejabberd
  ports:
    - protocol: TCP
      name: xmpp
      port: 5222
      targetPort: 5222
    - protocol: TCP
      name: http
      port: 5443
      targetPort: 5443
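
Once this Service is applied, GKE should create the two NEGs named in the annotation. A quick way to confirm they exist (a sketch; the filter expression is an assumption, adjust to your naming):

```shell
# List the NEGs GKE created from the annotation; both names from the
# annotation should appear, one set per zone the pods run in.
gcloud compute network-endpoint-groups list \
  --filter="name~ejabberd"
```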

The health check is an HTTP endpoint on port 5443 that returns an HTTP 200 (OK) status code at path /.
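
For reference, a health check matching that description could be created like so (a sketch; the health-check name is a placeholder, not from the original setup):

```shell
# HTTP health check probing path / on port 5443, as described above.
gcloud compute health-checks create http ejabberd-http-hc \
  --port=5443 \
  --request-path=/
```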

The issue is that when I create an SSL Proxy, the health check always fails, but when I SSH into the pods and execute $ curl localhost:5443/ I get a success response. TCP health checks didn't work either.
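
When a health check fails even though the pod answers locally, one way to inspect what the load balancer actually sees (a sketch; BACKEND_SERVICE_NAME is a placeholder) is:

```shell
# Shows per-endpoint health state (HEALTHY/UNHEALTHY) as reported
# by the load balancer's health checks against the NEG endpoints.
gcloud compute backend-services get-health BACKEND_SERVICE_NAME \
  --global
```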

Karim H

1 Answer

The issue was caused by the Cloud Load Balancer not being able to reach the ports. I fixed it by creating a firewall rule that allows the load balancer's health-check source ranges to access the specified ports.

This is achieved by running the following command:

gcloud compute firewall-rules create fw-allow-health-check-and-proxy \
  --network=NETWORK_NAME \
  --action=allow \
  --direction=ingress \
  --target-tags=GKE_NODE_NETWORK_TAGS \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --rules=tcp:5443,tcp:5222

More can be found in the documentation: Attaching an external HTTP(S) load balancer to standalone NEGs.
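
To confirm the rule was created as intended (a sketch using the same rule name as above):

```shell
# Verify the firewall rule exists and covers the health-check
# source ranges (130.211.0.0/22, 35.191.0.0/16) and both ports.
gcloud compute firewall-rules describe fw-allow-health-check-and-proxy \
  --format="yaml(name,sourceRanges,allowed)"
```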

Pit