
We have created a Kubernetes cluster (1 master, 2 worker VMs) using kubeadm on Azure. The master and worker VMs have private IPs only.

We are bringing up an nginx pod behind a Service of type `LoadBalancer` in the cluster. After this, we can see a Kubernetes load balancer and a public IP resource created in the resource group, which is fine.
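For reference, the Service we apply looks roughly like the following (the name, selector labels, and ports are illustrative, not our exact manifest):

```yaml
# Sketch of a LoadBalancer Service fronting an nginx pod.
# Names and labels below are placeholders, not the actual config.
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx        # must match the nginx pod's labels
  ports:
    - port: 80        # port exposed on the load balancer's public IP
      targetPort: 80  # container port on the nginx pod
```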

However, the two worker VMs then automatically get the load balancer's public IP (which is something we do not want).

Is this the default behaviour when deploying an LB via Kubernetes on Azure?

Dilip
  • Did you add some additional configuration to make the `LoadBalancer` service type work on `kubeadm` on Azure? If yes, could you provide the tutorial that you used? What CNI plugin are you using? Could you share the yaml configuration files that you applied? Please add the missing information to your question. – Mikołaj Głodziak Aug 11 '21 at 09:59
  • Any reason you're building a cluster yourself rather than using AKS, which would deal with these issues for you? – Sam Cogan Aug 25 '21 at 09:28
  • @SamCogan Well, we do have and use AKS as well. One of the advantages of using kubeadm (at least for us) is that it can be used to set up Kubernetes on, for example, both AWS and Azure. So, the same tooling for different clouds. There are certainly pros and cons, but I feel it gives you some flexibility and custom options in terms of configuration/setup. – Dilip Aug 28 '21 at 13:19
  • @dilip fair enough, you're just taking on a lot of management and support load. There are options to still use the cloud providers' Kubernetes offerings and still deploy to multiple clouds, such as Terraform and Pulumi. – Sam Cogan Sep 01 '21 at 14:31
