
In my Google Cloud console I see the following error for my defined Ingresses:

Error during sync: error while evaluating the ingress spec: service "monitoring/kube-prometheus" is type "ClusterIP", expected "NodePort" or "LoadBalancer"

I am using Traefik as a reverse proxy (instead of nginx), so I point my Ingresses at ClusterIP services. As far as I understand the process, all traffic is proxied through the Traefik service (which is exposed through a LoadBalancer), so all my other services SHOULD actually be of type ClusterIP instead of NodePort or LoadBalancer?

Question:

So why does Google Cloud warn me that it expected a NodePort or LoadBalancer?


kentor

4 Answers


I don't know why that error happens; the configuration looks valid to me. But to clear the error, you can switch your Service to a named NodePort, then have your Ingress reference the port name instead of the number. For example:

Service:

apiVersion: v1
kind: Service
metadata:
  name: testapp
spec:
  ports:
  - name: testapp-http # ADD THIS
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: testapp
  type: NodePort

Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testapp
spec:
  rules:
  - host: hostname.goes.here
    http:
      paths:
      - backend:
          serviceName: testapp
          # USE THE PORT NAME FROM THE SERVICE INSTEAD OF THE PORT NUMBER
          servicePort: testapp-http
        path: /
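Note that the `extensions/v1beta1` Ingress API shown above was removed in Kubernetes 1.22. On newer clusters, the equivalent manifest uses `networking.k8s.io/v1`, where the named port is referenced under `backend.service.port.name` (a sketch using the same hypothetical `testapp` service and hostname):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testapp
spec:
  rules:
  - host: hostname.goes.here
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: testapp
            port:
              # Reference the named port from the Service, as above
              name: testapp-http
```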

Update:

This is the explanation I received from Google.

Services are ClusterIP by default [1], and this type of service is meant to be reachable from inside the cluster. It can be reached from outside through kube-proxy, but it is not meant to be exposed directly by an ingress.

As a suggestion, I personally find this article [2] good for understanding the difference between these types of services.

[1] https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types

[2] https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0

aayore
  • Well, actually this service shouldn't be directly accessible to the public. It should only be available via the proxy (which it is when I use a ClusterIP). With a NodePort it would be directly accessible from the public internet, which I don't want. Am I misunderstanding something? – kentor Aug 12 '18 at 15:47
  • That will depend on your network setup. My VPCs are all private. (Which I think is the default.) The only way to get traffic into my cluster is via a load balancer. – aayore Aug 22 '18 at 20:31
  • Side note: I'm using the nginx ingress controller. I was having an issue where it was racing the GCP ingress controller and stuff was flipping out. You can either disable the HttpLoadBalancing addon for GKE or be sure to specify the `kubernetes.io/ingress.class` annotation. https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#important – aayore Aug 22 '18 at 20:36

Thanks @aayore. In my case I had to specify an ingress class explicitly so that Google Cloud wouldn't interfere. The nginx ingress controller seems to be happy with ClusterIP services.

metadata:
  name: foo
  annotations:
    kubernetes.io/ingress.class: "nginx"
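Note that the `kubernetes.io/ingress.class` annotation is deprecated in newer Kubernetes versions; since 1.18 the same intent can be expressed with the `ingressClassName` field in the Ingress spec (a sketch, assuming an IngressClass named `nginx` exists in the cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: foo
spec:
  # Replaces the kubernetes.io/ingress.class annotation
  ingressClassName: nginx
```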
petrus-jvrensburg
  • Related question (GCP, ingress), still having an issue: https://stackoverflow.com/questions/60923601/ingress-cluster-ip-back-end-got-err-connection-refused?noredirect=1#comment107814412_60923601 – ses Mar 31 '20 at 01:16

For us the solution was to set the annotation `cloud.google.com/neg: '{"ingress": true}'` on the services to be exposed.

Usually this annotation is set automatically on all services when the following conditions are met:

  • Services created on at least 1.17.6-gke.7
  • VPC-native clusters
  • Not using a Shared VPC
  • Not using GKE Network Policy

When we started to introduce network policies, our exposed services stopped working.

With this annotation, the example above should also work with a ClusterIP service:

apiVersion: v1
kind: Service
metadata:
  name: testapp
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  ports: ...
  type: ClusterIP
user2355282

We were having this issue on our production environment, but not on staging, and it was being caused by a mismatch in cluster versions. It turns out that ClusterIP is only a valid ServiceType on GKE if you're using Container Native Load Balancing, which is enabled by default on GKE clusters with version 1.17.6-gke.7 and up. We fixed it by simply upgrading our production cluster to the latest stable version.
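As a quick check of whether a cluster is new enough for container-native load balancing, one option (assuming `gcloud` is installed and pointed at the right project; the cluster name and zone below are placeholders) is:

```shell
# The master version determines whether container-native load balancing
# (and thus ClusterIP backends behind an ingress) is enabled by default
gcloud container clusters describe my-cluster \
  --zone us-central1-a \
  --format='value(currentMasterVersion)'
```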

Adarah