
The Knative docs describe the following:

To configure DNS for Knative, take the External IP or CNAME from setting up networking, and configure it with your DNS provider as follows:

  • If the networking layer produced an External IP address, then configure a wildcard A record for the domain:

    # Here knative.example.com is the domain suffix for your cluster

    *.knative.example.com == A 35.233.41.212

  • If the networking layer produced a CNAME, then configure a CNAME record for the domain:

    # Here knative.example.com is the domain suffix for your cluster

    *.knative.example.com == CNAME a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com

However, my environment doesn't have an external load balancer and hence no EXTERNAL-IP:

$ kubectl --namespace istio-system get service istio-ingressgateway
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                      AGE
istio-ingressgateway   NodePort   10.110.132.172   <none>        15021:31278/TCP,80:32725/TCP,443:30557/TCP,15443:32309/TCP   8h
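For what it's worth, with a NodePort service, plain HTTP traffic enters each node on the nodePort, not on port 80. A quick sketch of pulling that port out of the PORT(S) column above (the sample string is hard-coded here; the kubectl jsonpath in the comment is what you'd run against a live cluster):

```shell
# On a live cluster you'd query the service directly, e.g.:
#   kubectl -n istio-system get svc istio-ingressgateway \
#     -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
# Here we parse the PORT(S) string shown above instead.
ports='15021:31278/TCP,80:32725/TCP,443:30557/TCP,15443:32309/TCP'

# Split on commas, then match the entry whose service port is 80
# and print the nodePort it maps to.
http_nodeport=$(echo "$ports" | tr ',' '\n' | awk -F'[:/]' '$1==80 {print $2}')
echo "$http_nodeport"   # → 32725
```

So an A record pointing at a node IP only serves port 80 directly if something (such as a hostPort) binds port 80 on the node itself; otherwise clients would need to use the nodePort explicitly.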

I do have an istio-ingressgateway configured:

$ kubectl get po -l istio=ingressgateway -n istio-system \
     -o jsonpath='{.items[*].status.hostIP}'
10.1.0.193 10.1.0.132 10.1.0.174

Can I simply set up DNS as follows?

*.knative.example.com     [some TTL]   IN   A    10.1.0.193
*.knative.example.com     [some TTL]   IN   A    10.1.0.132
*.knative.example.com     [some TTL]   IN   A    10.1.0.174
Chris Snow

2 Answers


Setting up DNS as follows works ok so far for me:

*.knative.example.com     [some TTL]   IN   A    10.1.0.193
*.knative.example.com     [some TTL]   IN   A    10.1.0.132
*.knative.example.com     [some TTL]   IN   A    10.1.0.174
Chris Snow

It looks like you're using hostPort networking here; if that's the case, then Kubernetes will map ports 80 and 443 of each host's IP address to Istio's Envoy pods. This will work as long as your istio-ingressgateway pods remain scheduled on the same machines (for example, if you use a DaemonSet and have 3 nodes in the cluster, put all three node IPs in DNS). Here are a few places where this will break down and where a Kubernetes LoadBalancer service would work better:

  • If one of the hosts fails, 1/N of clients will try the bad IP address and may error out, or hit a long timeout and then retry. You'll need to remove the failed host from DNS to restore full availability.
  • If you have more hosts than istio-ingressgateway pods, then every time one of those pods is rescheduled (Deployment update, host kernel upgrade, etc.), you'll have an outage like the above. Using a DaemonSet can avoid this if you have fewer than 5-7 hosts, but larger sets of records may cause other problems.
  • You can't use a HorizontalPodAutoscaler (HPA) to right-size the number of istio-ingressgateway pods (see above for the consequences of a mismatch)
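If you do go the node-IP route, one way to keep a gateway pod bound to port 80/443 on every node is to run the gateway as a DaemonSet with hostPorts. A rough sketch only, assuming a stock Istio layout (in practice you'd change the deployment type through your IstioOperator or Helm values rather than hand-writing this; the image tag and port numbers are assumptions):

```yaml
# Illustrative DaemonSet sketch; not the exact output of an Istio install.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway
  template:
    metadata:
      labels:
        istio: ingressgateway
    spec:
      containers:
        - name: istio-proxy
          image: docker.io/istio/proxyv2:1.17.2   # version is an assumption
          ports:
            - containerPort: 8080
              hostPort: 80      # bind host port 80 to Envoy's HTTP port
            - containerPort: 8443
              hostPort: 443     # bind host port 443 to Envoy's HTTPS port
```

With one pod per node, every node IP in your DNS records always has a local gateway listening, which sidesteps the rescheduling problem described above (but not the failed-host problem).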

If any of the above concern you, you might look at MetalLB as a low-cost software load balancer. If none of them are major concerns, NodePort services are a bit simpler than LoadBalancer ones.
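If you do try MetalLB, a minimal layer-2 setup looks roughly like this (a sketch using the metallb.io/v1beta1 CRDs; the address range is an assumption, pick IPs that are free on your node subnet):

```yaml
# MetalLB layer-2 sketch; address pool is illustrative
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.1.0.240-10.1.0.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ingress-pool
```

After that, switching the istio-ingressgateway service to type LoadBalancer should get it an EXTERNAL-IP from the pool, and you can point a single wildcard A record at that one address instead of listing every node.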

E. Anderson