14

On GKE, a Kubernetes Ingress is backed by a Compute Engine load balancer, which has a cost. For example, over 2 months I paid €16.97.

In my cluster I have 3 namespaces (default, dev and prod), so to reduce costs I would like to avoid spawning 3 load balancers. The question is: how do I configure the current one to point to the right namespace?

GKE requires the Ingress's target Service to be of type NodePort, and I am stuck because of that constraint.

I would like to do something like:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  namespace: dev
  annotations: # enable the SSL certificate
    kubernetes.io/ingress.global-static-ip-name: lb-ip-adress
spec:
  rules:
    - host: dev.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: dev-service # This is the current case: 'dev-service' is a NodePort
              servicePort: http

    - host: domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service # This Service lives in the 'dev' namespace and is of type ExternalName. Its purpose is to point to the real target Service living in the 'prod' namespace.
              servicePort: http

    - host: www.domain.com
      http:
        paths:
          - path: /*
            backend:
              serviceName: prod-service
              servicePort: http

As GKE requires the Service to be a NodePort, I am stuck with prod-service.
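For reference, the cross-namespace ExternalName trick mentioned above would look roughly like this (a sketch using the service names from my example):

```yaml
# Sketch of the ExternalName proxy Service in the 'dev' namespace.
# GKE's Ingress controller will not accept this as a backend,
# since it requires a NodePort Service -- which is exactly the problem.
apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: dev
spec:
  type: ExternalName
  externalName: prod-service.prod.svc.cluster.local
```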

Any help will be appreciated.

Thanks a lot

akuma8
    Do you have any news regarding this issue? I'm trying to find a way to solve the same problem as well. – Ricardo Nov 25 '19 at 15:54
  • @Ricardo Unfortunately no. I'm looking for another cloud provider. I'm currently studying the way others (Azure, AWS, OpenShift, etc.) implement Ingress. – akuma8 Nov 25 '19 at 20:34
  • Do you think Google will not suffice? :/ Can you share what you tried? (Maybe we can chat a bit in an SO chat.) – Ricardo Nov 25 '19 at 21:08
  • Maybe GKE suffices, but I didn't find a suitable solution and I don't have enough time to waste looking for a trick. I found GKE too rigid and not very customizable. What's strange is that K8s is developed by Google. – akuma8 Nov 25 '19 at 21:50
  • I've been looking and talking with some guys I know, and creating an Ingress per namespace is the best way to go. However, it's possible to create some sort of synthetic service in the default namespace pointing to the namespace-specific service, and then you only have to deploy the Ingress in default. – Ricardo Nov 26 '19 at 17:30
  • Sure, having one Ingress is the best and easiest solution to adopt. But on GKE we have 2 constraints: 1st, the target service should be a NodePort, which means we can't use a serviceName as a proxy for another service located in another namespace. 2nd, the Ingress implementation is a load balancer provided by GCE, which has a cost. In my case I would have to pay 3×16.97€ if I chose 1 Ingress per namespace. If you have enough money you can go for that solution, but in my case I can't. – akuma8 Nov 26 '19 at 18:03
  • You can, using the notation serviceName.namespace.svc.cluster.local. Anyway, if you find a way please update here. I'm trying to implement this using ingress-nginx but I will need 2 Ingresses. If I could save € I would be very thankful. – Ricardo Nov 26 '19 at 18:11

3 Answers

6

OK, here is what I have been doing. I have only one Ingress, with one backend Service pointing to nginx.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: nginx-svc
    servicePort: 80

And in your nginx deployment/controller you can define ConfigMaps with typical nginx configuration. This way you use one Ingress and target multiple namespaces.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      listen [::]:80;
      server_name  _;

      location / {
        add_header Content-Type text/plain;
        return 200 "OK.";
      }

      location /segmentation {
        proxy_pass http://myservice.mynamespace.svc.cluster.local:80;
      }
    }

And the Deployment will use the above nginx config via the ConfigMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # podAntiAffinity prevents two nginx pods from running on the same node
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - nginx
              topologyKey: kubernetes.io/hostname
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-configs
              mountPath: /etc/nginx/conf.d
          livenessProbe:
            httpGet:
              path: /
              port: 80
      # Load the configuration files for nginx
      volumes:
        - name: nginx-configs
          configMap:
            name: nginx-config

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: "TCP"
      nodePort: 32111
      port: 80

This way you can take advantage of Ingress features like TLS/SSL termination (Google-managed certificates or cert-manager), and if you want, you can also keep your more complex configuration inside nginx.
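For example, a Google-managed certificate can be attached to that single Ingress roughly like this (a sketch; the certificate name and domain are placeholders):

```yaml
apiVersion: networking.gke.io/v1beta1
kind: ManagedCertificate
metadata:
  name: my-cert             # hypothetical name
spec:
  domains:
    - domain.com
---
# Then reference it from the Ingress metadata:
#   annotations:
#     networking.gke.io/managed-certificates: my-cert
```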

Prata
  • Thanks, that was the solution I adopted, but I forgot to answer my own question. I also updated my DNS config to point all domains at the same IP address and let nginx forward to the requested service. I still think GKE is more cumbersome regarding Ingress management than other Kubernetes providers. – akuma8 Feb 24 '20 at 16:22
  • But this way you won't get the benefits of other Ingress features like container-native LB https://cloud.google.com/kubernetes-engine/docs/how-to/container-native-load-balancing – humble_wolf Mar 23 '21 at 05:52
  • @humble_wolf The author didn't want a load balancer for each service, which would obviously add more cost, so the simple option without the LB is this answer. If you really want an LB you can go with e.g. Istio or the NGINX Ingress Controller. – Prata Mar 23 '21 at 12:40
2

Use @Prata's approach, but with one change: do not route prod traffic via nginx. Route it directly from the load balancer to the service, and use nginx only for non-prod traffic, e.g. staging.

The reason is that Google's HTTPS load balancer uses container-native load balancing (Link), which routes traffic directly to healthy pods, saving hops and improving efficiency. Why not use it for production?
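Sketched as Ingress rules (host and service names are placeholders from the question; note that an Ingress can only reference Services in its own namespace, so the prod NodePort Service must live alongside the Ingress):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
    - host: domain.com        # prod: straight to the NodePort service
      http:                   # (benefits from container-native LB)
        paths:
          - backend:
              serviceName: prod-service
              servicePort: 80
    - host: dev.domain.com    # non-prod: via the nginx proxy
      http:
        paths:
          - backend:
              serviceName: nginx-svc
              servicePort: 80
```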

humble_wolf
2

One alternative (and probably the most flexible GCP-native) solution for HTTP(S) load balancing is to use standalone NEGs. This requires you to set up all parts of the load balancer yourself (URL maps, health checks, etc.).

There are multiple benefits, such as:

  1. One load-balancer can serve multiple namespaces
  2. The same load-balancer can integrate other backends as well (like other instance groups outside your cluster)
  3. You can still use container native load balancing

One challenge of this approach is that it is not "GKE native", which means the routes will still exist even if you delete the underlying Service. This approach is therefore best maintained through tools like Terraform, which give you holistic control of your GCP deployment.
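For reference, a Service is exposed as a standalone NEG through an annotation, roughly like this (a sketch; service and app names are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prod-service
  namespace: prod
  annotations:
    # Creates a standalone NEG for port 80; you then attach it to
    # your self-managed backend service / URL map yourself.
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
spec:
  type: ClusterIP   # standalone NEGs do not require NodePort
  selector:
    app: prod-app
  ports:
    - port: 80
      targetPort: 8080
```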

Manu