
I would like to expose a service on my managed DigitalOcean Kubernetes (single-node) cluster on port 80 without using DigitalOcean's load balancer. Is this possible? How would I do this?

This is essentially a hobby project (I'm just getting started with Kubernetes) and I want to keep the cost very low.

Joseph Horsch
  • Why can't you use minikube if you are cost sensitive? https://kubernetes.io/docs/tasks/tools/install-minikube/ – Steephen Jan 10 '19 at 04:22
  • Because the app still needs to be publicly accessible: think something like a personal website, or a web app for a portfolio. The link you included says minikube will only allow you to run things "in a virtual machine on your personal computer", which is not enough to achieve that goal. – Shawn Jul 07 '19 at 05:14

2 Answers


You can deploy an Ingress configured to use the host network and port 80/443.

  1. DO's firewall for your cluster doesn't have 80/443 inbound open by default.

    If you edit the auto-created firewall, the rules will eventually reset themselves. The solution is to create a separate firewall that also points at the same Kubernetes worker nodes:

$ doctl compute firewall create \
--inbound-rules="protocol:tcp,ports:80,address:0.0.0.0/0,address:::/0 protocol:tcp,ports:443,address:0.0.0.0/0,address:::/0" \
--tag-names=k8s:CLUSTER_UUID \
--name=k8s-extra-mycluster

(Get the CLUSTER_UUID value from the dashboard, or from the ID column of doctl kubernetes cluster list.)

  2. Create the nginx ingress using the host network. I've included the Helm chart config below, but you could do it via the direct install process too.

EDIT: The Helm chart in the above link has been deprecated, so (as per the new docs) the correct way to install the chart is:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update

After the repo is added and updated:

# For Helm 2 (deprecated stable chart)
$ helm install stable/nginx-ingress --name=myingress -f myingress.values.yml

# For Helm 3 (deprecated stable chart)
$ helm install myingress stable/nginx-ingress -f myingress.values.yml

# For Helm 3, the new way, using the ingress-nginx repo
$ helm install myingress ingress-nginx/ingress-nginx -f myingress.values.yml

myingress.values.yml for the chart:

---
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  daemonset:
    useHostPort: true
  service:
    type: ClusterIP
rbac:
  create: true
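The values above target the deprecated stable/nginx-ingress chart. If you install from the newer ingress-nginx repo instead, the key names differ slightly; here is a sketch of the equivalent values, with key names assumed from the newer chart (verify against `helm show values ingress-nginx/ingress-nginx`, since chart keys change between releases):

```yaml
# myingress.values.yml for the newer ingress-nginx chart
# (key names assumed; double-check your chart version)
controller:
  kind: DaemonSet
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  hostPort:
    enabled: true   # replaces the old daemonset.useHostPort
  service:
    type: ClusterIP
rbac:
  create: true
```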
  3. You should be able to access the cluster on :80 and :443 via any worker node IP, and it'll route traffic to your ingress.

  4. Since node IPs can and do change, look at deploying external-dns to manage DNS entries pointing at your worker nodes. Again, using the Helm chart, and assuming your DNS domain is hosted by DigitalOcean (though any supported DNS provider will work):

# For Helm 2
$ helm install --name=mydns -f mydns.values.yml stable/external-dns

# For Helm 3
$ helm install mydns stable/external-dns -f mydns.values.yml

mydns.values.yml for the chart:

---
provider: digitalocean
digitalocean:
  # create the API token at https://cloud.digitalocean.com/account/api/tokens
  # needs read + write
  apiToken: "DIGITALOCEAN_API_TOKEN"
domainFilters:
  # domains you want external-dns to be able to edit
  - example.com
rbac:
  create: true
  5. Create a Kubernetes Ingress resource to route requests to an existing Kubernetes service:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: testing123-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: testing123.example.com             # the domain you want associated
      http:
        paths:
          - path: /
            backend:
              serviceName: testing123-service  # existing service
              servicePort: 8000                # existing service port
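The manifest above uses extensions/v1beta1, which was removed in Kubernetes 1.22. On newer clusters, a sketch of the equivalent resource in networking.k8s.io/v1, assuming the same service name and port and an IngressClass named nginx, would be:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: testing123-ingress
spec:
  ingressClassName: nginx          # replaces the kubernetes.io/ingress.class annotation
  rules:
    - host: testing123.example.com
      http:
        paths:
          - path: /
            pathType: Prefix       # pathType is required in v1
            backend:
              service:
                name: testing123-service
                port:
                  number: 8000
```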
  6. After a minute or so you should see the DNS records appear and become resolvable:
$ dig testing123.example.com             # should return worker IP address
$ curl -v http://testing123.example.com  # should send the request through the Ingress to your backend service

(Edit: editing the automatically created firewall rules eventually breaks; add a separate firewall instead.)

cosmicsage
rcoup
    Thank you so much for this! Spent more than 3 days before finding this great answer. – stasdeep May 11 '19 at 15:25
    `helm install myingress stable/nginx-ingress -f myingress.values.yml` for Helm 3 – hkarask Apr 10 '20 at 23:24
  • @ZitRo what happens when your DO K8s nodes get automatically-replaced? (eg: scaling/upgrades/failures/etc). Or are you referring to running k8s yourself on "bare" VMs? – rcoup Nov 25 '20 at 16:15
  • @rcoup my suggestion regarding using floating IPs with DO k8s (as of Dec 1 2020) turned out to be wrong. Here's the answer from DO support: the scenario where the floating IPs will not work is when the nodes are recycled by you/DO or when the cluster is upgraded. There is no relation between the node and the node that replaces it, so the floating IP you assign simply believes the node it was assigned to has been deleted and is then detached. The lack of this relation between a node and its replacement is the reason we do not recommend floating IPs on DOKS nodes as they only temporarily "work". – ZitRo Dec 01 '20 at 14:08
    Would it be possible to provide these instructions without using another dependency like Helm? – marked-down Apr 29 '21 at 23:30
    I found that this configuration led to cluster IP addresses in DNS, rather than the routeable public IPs of the nodes. I've worked around that by making a change to `myingress.values.yml`: set `controller.publishService.enabled` to `false`. Not certain that is the 'right' thing to do but it fixed the DNS. – 46bit Aug 28 '21 at 00:16
  • This answer helped me a lot, but external-dns didn't work for me with Cloudflare because it picked up the internal ClusterIP address from the nginx ingress, not the node's external IP. The solution I used was to use https://github.com/calebdoxsey/kubernetes-cloudflare-sync instead to directly sync the external IP of the nodes to something like `k8s.mydomain`. Then I simply add a CNAME to this address whenever I add a new nginx ingress. – mcartmell Aug 29 '21 at 10:51
    Unfortunately, this no longer seems to be a working solution. The POD seems to be crashing continuously with the error log "port 443 is already in use. Please check the flag --https-port". Looking into this error, I have the impression pod/daemonset cpc-bridge-proxy is already using the secure port. – Frederik Nov 06 '21 at 19:57
  • Super life saver! So far I've got it running for http. My key takeaway was that I needed to deploy the nginx-ingress-controller as a DaemonSet with the host network port. This approach is hinted at here: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network but without guidance on how to set it up with helm, it was not very explicit. Thank you so much :D – Patricio Marrone May 08 '22 at 14:34

A NodePort Service can do what you want. Something like this:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    nodePort: 80
    targetPort: 80

This will redirect incoming traffic from port 80 of the node to port 80 of your pod. Publish the node IP in DNS and you're set.
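As the comments below point out, nodePort is restricted to a configurable range (30000-32767 by default), so the manifest above will be rejected as written. A version that would at least pass validation, with 30080 as an assumed in-range port, might look like this; you'd still need something outside the cluster forwarding port 80 to 30080:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80          # port exposed inside the cluster (required field)
    targetPort: 80    # container port on the pod
    nodePort: 30080   # assumed value; must fall in the default 30000-32767 range
```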

In general exposing a service to the outside world like this is a very, very bad idea, because the single node passing through all traffic to the service is both going to receive unbalanced load and be a single point of failure. That consideration doesn't apply to a single-node cluster, though, so with the caveat that LoadBalancer and Ingress are the fault-tolerant ways to do what you're looking for, NodePort is best for this extremely specific case.

Steve McKay
    Thank you for the response and example! I read that nodePort was restricted to non-standard ports (30000-32767). Is that true in this context? – Joseph Horsch Jan 15 '19 at 15:11
    You are correct, and I read the documentation incorrectly. The solution I suggested will not work. – Steve McKay Jan 15 '19 at 19:21
  • Is the following the only way to expose `ingress controller service` on port `80` to the external users without `LoadBalancer`? When I use `NodePort`, I would have to bind 80 with NodePort(between 30000~32767) by using `Nginx` or `Apache`. – Jinsu Mar 25 '20 at 12:27