
I am trying to expose services to the world outside our Rancher clusters.

api1.mydomain.com, api2.mydomain.com, and so on should be accessible.

Inside Rancher we have several clusters. I want to use one cluster specifically; it spans three nodes: node1cluster1, node2cluster1 and node3cluster1.

I have added an Ingress inside the Rancher cluster to forward requests for api1.mydomain.com to a specific workload.
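For reference, a minimal sketch of what such an Ingress can look like (the Service name, port and namespace are assumptions, not taken from the question; the API version matches clusters of that era and may differ on newer Kubernetes):

```yaml
# Sketch: route api1.mydomain.com to a Service in front of the workload.
# "api1-service" and port 80 are placeholders for the actual backing Service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api1-ingress
  namespace: default
spec:
  rules:
    - host: api1.mydomain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: api1-service
              servicePort: 80
```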

In our DNS I created an entry for api1.mydomain.com to be forwarded, but it doesn't work yet.

Which IP or hostname should I enter in the DNS? Should it be rancher.mydomain.com, where the Rancher web GUI runs? Or a single node of the cluster that has the Ingress (node1cluster1)?

Neither of these options seems ideal. What is the correct way to do this?

I am looking for a solution that exposes a full URL to the outside world. (Exposing ports is not an option, as the company's DNS can't forward to them.)

SwissCoder

1 Answer


Simple answer based on the inputs provided: Create a DNS entry with the IP address of Node1cluster1.

I am not sure how you installed the ingress controller, but by default it is deployed as a DaemonSet. So in DNS you can use either any one of the cluster node IP addresses or all of them (don't expect DNS to load balance, though).
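For example, with BIND-style zone records (the node IP addresses below are placeholders; round-robin A records are not health-checked load balancing):

```text
; mydomain.com zone snippet -- node IPs are placeholders
api1    IN  A   203.0.113.11   ; node1cluster1
api1    IN  A   203.0.113.12   ; node2cluster1
api1    IN  A   203.0.113.13   ; node3cluster1
; same node IPs for every hostname; the ingress controller routes by Host header
api2    IN  A   203.0.113.11
api2    IN  A   203.0.113.12
api2    IN  A   203.0.113.13
```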

The other option is to put a load balancer in front, configured with all the node IP addresses, to actually distribute the traffic.
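A minimal HAProxy sketch of that setup, assuming L4 (TCP) mode and placeholder node IPs (ports 80/443 are what the ingress controller exposes via hostPort by default):

```text
# L4 (TCP) load balancing to the ingress controller on every cluster node.
# Node names/IPs are placeholders; repeat the same pattern for port 443.
frontend ingress_http
    bind *:80
    mode tcp
    default_backend rancher_cluster_http

backend rancher_cluster_http
    mode tcp
    balance roundrobin
    server node1cluster1 203.0.113.11:80 check
    server node2cluster1 203.0.113.12:80 check
    server node3cluster1 203.0.113.13:80 check
```

With this in place, the DNS entries for api1.mydomain.com, api2.mydomain.com, etc. all point at the load balancer's address instead of at individual nodes.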

Another strategy I have seen is to dedicate a handful of nodes to running the ingress controller by using taints/tolerations, and not schedule regular workloads on them.
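A rough sketch of that approach (the label/taint key, value and node name are made up for illustration):

```yaml
# 1) Taint and label each dedicated ingress node, e.g.:
#      kubectl taint nodes ingressnode1 dedicated=ingress:NoSchedule
#      kubectl label nodes ingressnode1 dedicated=ingress
# 2) Excerpt from the ingress controller DaemonSet's Pod template,
#    so only the ingress controller lands on those nodes:
spec:
  template:
    spec:
      nodeSelector:
        dedicated: ingress
      tolerations:
        - key: dedicated
          operator: Equal
          value: ingress
          effect: NoSchedule
```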

leodotcloud
  • Thanks a lot! So with 'DaemonSet' you mean it behaves similarly to exposing a registered port? It will listen on all nodes and redirect? But if node1cluster1 is removed, will it really still work? – SwissCoder Apr 15 '19 at 05:41
  • A DaemonSet makes sure there is a Pod running on every node of the cluster. If you look at the DaemonSet spec, you can see it uses a hostPort of 80 and 443 (because the default NodePort range doesn't cover these ports); see the excerpt after these comments. If you have an LB in front of the nodes, it will be monitoring their health, so if node1cluster1 goes down the LB will detect it and remove it from the target group. But if you are using simple DNS, then no: there will be downtime. – leodotcloud Apr 15 '19 at 17:35
  • The thing is, if I have 1000 services pointing to the one IP registered in the load balancer, which forwards to the cluster nodes, can the load balancer really keep track of which services are actually down? – SwissCoder Apr 16 '19 at 11:01
  • If the LB is running in L4 (TCP) mode then it doesn't really care; it's just forwarding. It's up to the cluster nodes running the ingress controller to figure out which service the traffic is destined for and forward it to the right Pod. So yes, the ingress controller can handle 1000 services with different hostnames; that's what it's designed for. An LB can also differentiate between those 1000 services itself if it's running in L7 (application) mode. – leodotcloud Apr 16 '19 at 20:28
  • Thanks for taking the time! I know these questions are already past the original one :) Really appreciate it, Sir. We actually have an external HAProxy instance; I guess it does only L4 load balancing and therefore would only track the health of a cluster node as a whole. – SwissCoder Apr 17 '19 at 10:52
  • HAProxy can work in both modes. You have to check the configuration. – leodotcloud Apr 17 '19 at 17:51
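To illustrate the hostPort detail mentioned in the comments above, this is roughly what the container ports of a Rancher-deployed nginx ingress controller DaemonSet look like (namespace and object names may differ in your cluster, so inspect your own spec):

```yaml
# Inspect your own cluster, e.g.:
#   kubectl -n ingress-nginx get daemonset -o yaml
# Typical container port excerpt; the controller binds directly on each node:
ports:
  - name: http
    containerPort: 80
    hostPort: 80
  - name: https
    containerPort: 443
    hostPort: 443
```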