
I'm trying to use Kubernetes to make configurations and deployments explicitly defined, and I also like Kubernetes' pod scheduling mechanisms. There are (for now) just 2 apps running on 2 replicas across 3 nodes. But Google Kubernetes Engine's load balancer is extremely expensive for a small app like ours (at least for the moment), and at the same time I'm not willing to switch to a single-instance hosting solution in a container, or to deploying the app on Docker Swarm, etc.

Using a node's IP seemed like a hack, and I thought it might expose some security issues inside the cluster. Therefore I configured a Træfik ingress and an ingress controller to get around Google's expensive flat rate for load balancing, but it turns out that an outward-facing ingress spins up a standard load balancer, unless I'm missing something.

I hope I'm missing something, since at these rates ($16 a month) I cannot justify using Kubernetes for this app from the start.

Is there a way to use GKE without using Google's load balancer?

A.Queue
interlude
    You can see if this helps you: https://serverfault.com/questions/863569/kubernetes-can-i-avoid-using-the-gce-load-balancer-to-reduce-cost/869453#869453 – Jonathan Lin Nov 30 '18 at 03:52

4 Answers

7

An Ingress is just a set of rules that tell the cluster how to route to your services, and a Service is another set of rules to reach and load-balance across a set of pods, based on the selector. A service can use 3 different routing types:

  • ClusterIP - this gives the service an IP that's only available inside the cluster which routes to the pods.
  • NodePort - this creates a ClusterIP, and then creates an externally reachable port on every single node in the cluster. Traffic to those ports routes to the internal service IP and then to the pods.
  • LoadBalancer - this creates a ClusterIP, then a NodePort, and then provisions a load balancer from a provider (if available like on GKE). Traffic hits the load balancer, then a port on one of the nodes, then the internal IP, then finally a pod.
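To make the layering concrete, here is a minimal sketch of a NodePort service (the name yourapp and the port values are placeholders for your own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: yourapp-nodeport   # placeholder name
spec:
  type: NodePort           # builds on ClusterIP: an internal IP is still created
  selector:
    app: yourapp           # must match your pod labels
  ports:
    - port: 80             # ClusterIP port inside the cluster
      targetPort: 8080     # container port on the pods
      nodePort: 30080      # port opened on every node (must be in 30000-32767)
```

If you omit nodePort, Kubernetes picks a free port from the allowed range for you.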

These different types of services are not mutually exclusive but actually build on each other, and it explains why anything public must be using a NodePort. Think about it - how else would traffic reach your cluster? A cloud load balancer just directs requests to your nodes and points to one of the NodePort ports. If you don't want a GKE load balancer then you can already skip it and access those ports directly.
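For example, you can list the node ports your services already expose (assuming kubectl is configured against your cluster):

```shell
# Show each service's type and its node port, if any.
kubectl get svc --all-namespaces \
  -o custom-columns=NAME:.metadata.name,TYPE:.spec.type,NODEPORT:.spec.ports[*].nodePort
```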

The downside is that the ports are limited to the 30000-32767 range. If you need the standard HTTP ports 80/443, you can't accomplish this with a Service alone; instead, specify the port directly in your Deployment. Use the hostPort setting to bind the containers directly to port 80 on the node:

containers:
  - name: yourapp
    image: yourimage
    ports:
      - name: http
        containerPort: 80
        hostPort: 80   # binds to port 80 on the actual node

This might work for you; it routes traffic directly to the container without any load balancing, but if a node has problems or the app stops running on that node, it will be unavailable.

If you still want load-balancing then you can run a DaemonSet (so that it's available on every node) with Nginx (or any other proxy) exposed via hostPort and then that will route to your internal services. An easy way to run this is with the standard nginx-ingress package, but skip creating the LoadBalancer service for it and use the hostPort setting. The Helm chart can be configured for this:

https://github.com/helm/charts/tree/master/stable/nginx-ingress
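A sketch of how that Helm install might look (the value names are taken from the stable/nginx-ingress chart; verify them against your chart version):

```shell
# Run the ingress controller as a DaemonSet bound to host ports 80/443,
# and expose it as ClusterIP only so no cloud LoadBalancer is provisioned.
helm install stable/nginx-ingress \
  --name nginx-ingress \
  --set controller.kind=DaemonSet \
  --set controller.daemonset.useHostPort=true \
  --set controller.service.type=ClusterIP
```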

Mani Gandham
  • 7,688
  • 1
  • 51
  • 60
  • Thanks Mani for the detailed info, also I believe that to use hostPort you need to enable hostNetwork: true under spec section where containers are defined. – Nitin G Sep 25 '21 at 11:10
5

One option is to completely disable this feature on your GKE cluster. When creating the cluster (on console.cloud.google.com) under Add-ons disable HTTP load balancing. If you are using gcloud you can use gcloud beta container clusters create ... --disable-addons=HttpLoadBalancing.

Alternatively, you can also inhibit the GCP Load Balancer by adding an annotation to your Ingress resources, kubernetes.io/ingress.class=somerandomstring.

For newly created ingresses, you can put this in the yaml document:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: somerandomstring
...

If you want to do that for all of your Ingresses you can use this example snippet (be careful!):

kubectl get ingress --all-namespaces \
  -o jsonpath='{range .items[*]}{"kubectl annotate ingress -n "}{.metadata.namespace}{" "}{.metadata.name}{" kubernetes.io/ingress.class=somerandomstring\n"}{end}' \
  | sh -x

Now, Ingresses are pretty useful with Kubernetes, so I suggest you check out the nginx ingress controller and, after deployment, annotate your Ingresses accordingly.

Janos Lenart
  • Can I bind the ingress with a static ip? – interlude Apr 09 '18 at 12:31
  • This depends on the ingress controller being used. For GCP Load Balancer, yes. – Janos Lenart Apr 09 '18 at 12:37
  • 3
    I guess it was not clear in the question that by saying GKE's standard load balancer I meant the GCP load balancer. I'm trying to somehow remove the GCP load balancer from the system and also bind to a static IP. – interlude Apr 09 '18 at 12:39
3

If you specify the Ingress class as an annotation on the Ingress object

kubernetes.io/ingress.class: traefik

Traefik will pick it up while the Google Load Balancer will ignore it. There is also a bit of Traefik documentation on this part.
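A minimal sketch of an Ingress annotated for Traefik (the name, host, and backend service are placeholders):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: yourapp-ingress                    # placeholder name
  annotations:
    kubernetes.io/ingress.class: traefik   # picked up by Traefik, ignored by the GCP LB controller
spec:
  rules:
    - host: app.example.com                # placeholder host
      http:
        paths:
          - backend:
              serviceName: yourapp         # your existing service
              servicePort: 80
```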

Timo Reimann
  • Does this completely skip the creation of the Load Balancing rule? Essentially OP doesn't want to be charged for Cloud Load Balancing – Jonathan Lin Nov 30 '18 at 03:50
  • 1
    Yes, it should skip the entire Google LB creation process, including the rules. Essentially, how it's implemented is that there's a Google LB controller (short, glbc) running in GKE clusters that watches for Ingresses which have the right `ingress.class` annotation value. For Ingresses that do not satisfy this condition, glbc will skip them entirely and not reach out to the GCP API at all. – Timo Reimann Nov 30 '18 at 08:11
0

You could deploy the nginx ingress controller in NodePort mode (e.g., if using the Helm chart, set controller.service.type to NodePort) and then load-balance amongst your instances using DNS. Just make sure you have static IPs for the nodes, or you could even create a DaemonSet that somehow updates your DNS with each node's IP.

Traefik seems to support a similar configuration (e.g. through serviceType in its helm chart).
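For the nginx variant, the install might look like this (value name per the stable/nginx-ingress chart; verify for your chart version):

```shell
# Expose the ingress controller via a NodePort service instead of a cloud LoadBalancer.
helm install stable/nginx-ingress \
  --name nginx-ingress \
  --set controller.service.type=NodePort
```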

eug
  • NodePort will give me a port greater than 30000; I need to bind port 80 on that static IP. – interlude Apr 09 '18 at 12:40
  • Ah ok, you can try hostPort then - in the nginx helm chart set controller.daemonset.useHostPort – eug Apr 10 '18 at 00:48