
I am learning about Ingress and Ingress controllers. As I understand it, I have to do the following tasks:

  1. Deploy the Ingress controller
  2. Create a ServiceAccount
  3. Expose the Ingress controller Service as a NodePort
  4. Create Ingress resources to attach Services
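From what I have gathered so far, the manifests for steps 1–3 would look roughly like this (all the names, the namespace, and the image tag are my own placeholders, not from any official manifest):

```yaml
# Step 2: the ServiceAccount the controller Pod will run as
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-ingress-controller
  namespace: ingress-system
---
# Step 1: the controller Deployment, referencing the ServiceAccount
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-ingress-controller
  namespace: ingress-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-ingress-controller
  template:
    metadata:
      labels:
        app: my-ingress-controller
    spec:
      # This field is how the ServiceAccount is actually "used":
      # the Pod authenticates to the API server with its token.
      serviceAccountName: my-ingress-controller
      containers:
        - name: controller
          image: registry.k8s.io/ingress-nginx/controller:v1.9.4  # example tag
          ports:
            - containerPort: 80
            - containerPort: 443
---
# Step 3: a NodePort Service exposing the controller
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  namespace: ingress-system
spec:
  type: NodePort
  selector:
    app: my-ingress-controller
  ports:
    - name: http
      port: 80
      nodePort: 30080
```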

My question is: why do we need a ServiceAccount at all? What Role should I attach to that ServiceAccount, and how do I use it?

Arghya Roy

1 Answer


What you are asking is fairly generic, and the answer can change a lot depending on your setup (microk8s, minikube, bare metal and so on); there are a lot of considerations to make.

The Nginx Ingress Controller installation guide, for example, shows how much things change between different environments.

It is also a good idea to simply use the installation resources provided in such guides instead of creating your own, because they are more complete and ready to use.


With this said, the reason for the ServiceAccount is that the Ingress Controller Pod needs to be able to access the Kubernetes API. Specifically, it needs to watch resources such as Ingresses (obviously), Services, Pods, Endpoints and more.
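A minimal sketch of what the permissions could look like (the role and binding names here are illustrative, and the real Nginx Ingress Controller manifests grant more than this):

```yaml
# ClusterRole granting read/watch access to the resources the
# controller needs to observe cluster-wide.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: my-ingress-controller
rules:
  - apiGroups: [""]
    resources: ["services", "pods", "endpoints", "secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "ingressclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]  # controllers typically report status back
    verbs: ["update"]
---
# Binding that attaches the ClusterRole to the controller's ServiceAccount.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-ingress-controller
subjects:
  - kind: ServiceAccount
    name: my-ingress-controller
    namespace: ingress-system  # namespace where the controller runs
```

A ClusterRole (rather than a namespaced Role) is the usual choice here because the controller needs to see Ingresses and Services in every namespace, not just its own.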

Imagine that the user (you) creates (or updates, or deletes) an Ingress resource: the Ingress Controller needs to notice it, parse it, understand what is declared, and reconfigure itself to serve the required Services at the configured domains and so on. Similarly, if something changes in the cluster, it may change how the controller needs to serve things.
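For instance, when a user applies an Ingress like the following (the hostname and the backend Service name are placeholders), the controller sees the change via its API watch and rewrites its own nginx configuration to route that host to that Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx  # tells this controller the Ingress is for it
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service  # must already exist in the same namespace
                port:
                  number: 80
```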

For example, if you take a look at the Bare-Metal raw YAML definitions of the Nginx Ingress Controller and search for Role you will notice what it needs and also how it is attached to the other resources.


Lastly, serving the Ingress Controller from a NodePort Service may not be the most resilient way to do it. It is okay for tests and such, but usually you want the Ingress Controller Pod to be served at a load-balanced IP address, so that it is resilient to a single node of your cluster going down.
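In an environment that supports it (cloud providers, or bare metal with something like MetalLB), that usually means putting a `LoadBalancer` Service in front of the controller instead of a NodePort (names below are again illustrative):

```yaml
# LoadBalancer Service: the platform provisions a stable external IP
# that balances traffic across the controller Pods on healthy nodes.
apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller
  namespace: ingress-system
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```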

The Nginx Controller Bare-Metal considerations explain this very well.

AndD