
I set up a k8s cluster using kubeadm init on bare metal.

I noticed the kube-apiserver is exposing its interface on a private IP:

# kubectl get pods kube-apiserver-cluster1 -n kube-system -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP           NODE                         NOMINATED NODE   READINESS GATES
kube-apiserver-cluster1                     1/1     Running   0          6d22h   10.11.1.99   cluster1   <none>           <none>

Here's the kube config inside the cluster:

# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.11.1.99:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

This is fine for using kubectl locally on the cluster, but I also want to expose the kube-apiserver on the node's public IP address. Ultimately I'm trying to configure kubectl on a laptop to access the cluster remotely.

How can I expose the kube-apiserver on an external IP address?

jersey bean

1 Answer


Execute the following command when initializing the cluster:

$ kubeadm init --pod-network-cidr=<ip-range> --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=<PRIVATE_IP>[,<PUBLIC_IP>,...]

Don't forget to replace the private IP with the public IP in your .kube/config if you use kubectl remotely.
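On the laptop side, that kubeconfig change can be sketched like this (a minimal sketch; `<PUBLIC_IP>` is a placeholder, and the cluster name `kubernetes` is taken from the `kubectl config view` output in the question):

```shell
# Copy the admin kubeconfig from the master node to the laptop first, e.g.:
#   scp root@<PUBLIC_IP>:/etc/kubernetes/admin.conf ~/.kube/config
# Then point the cluster entry at the public IP instead of the private one.
kubectl config set-cluster kubernetes --server=https://<PUBLIC_IP>:6443

# Verify the API server answers on the public address:
kubectl cluster-info
```

Note that TLS verification will only succeed if the public IP was included in the API server certificate's SANs (the --apiserver-cert-extra-sans flag above).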

You can also forward the private IP of the master node to its public IP. Run this command on each worker node before running kubeadm join:

$ sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>

Keep in mind that you'll also have to forward the workers' private IPs the same way on the master node to make everything work correctly (if they suffer from the same issue of being behind a cloud provider's NAT).
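As a sketch, the rules on both sides might look like this (the addresses are hypothetical: 10.0.0.x stand in for the private IPs and 203.0.113.x for the public IPs):

```shell
# On each worker node, before running `kubeadm join`:
# rewrite locally generated traffic destined for the master's private IP
# so it goes to the master's public IP instead.
sudo iptables -t nat -A OUTPUT -d 10.0.0.1 -j DNAT --to-destination 203.0.113.1

# On the master node, the mirror rules for each worker that is also
# behind the cloud provider's NAT:
sudo iptables -t nat -A OUTPUT -d 10.0.0.2 -j DNAT --to-destination 203.0.113.2
sudo iptables -t nat -A OUTPUT -d 10.0.0.3 -j DNAT --to-destination 203.0.113.3
```

The OUTPUT chain of the nat table only affects traffic originating on the node itself, which is why each side needs its own rules.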

See more: apiserver-ip, kube-apiserver.

Malgorzata
  • I think what you're saying here is my options are as follows: A) use that kubeadm init cmd if you're first setting up k8s, or B) use iptables to forward packets if you've already set up your k8s cluster. Right? In my situation I already have 1 master and 2 worker nodes, and didn't want to destroy my k8s cluster and start over. So in that case I use iptables to forward packets from the public to the private IP, right? – jersey bean Sep 10 '20 at 18:23
  • Okay, I found this command worked for me: `iptables -i eth1 -o eth0 -p tcp --dport 6443 -A FORWARD` Note: eth1 is the interface for my public IP, and eth0 is the interface for my private IP. I'm using port 6443 as this is the port used by the `kube-apiserver`. – jersey bean Sep 10 '20 at 19:55
  • Thanks for your answer. See my comments above. I've accepted your answer because it helped me understand the approaches I could take to solve this issue. – jersey bean Sep 10 '20 at 19:57
  • Now I'm really confused. :) Next I needed `kubectl config set-credentials` to allow remote `kubectl` access from my laptop. I basically just copied over the credentials from ~/.kube/config on my cluster to my macbook. I was able to use `kubectl` to access remotely. However, just to experiment, I deleted the iptables FORWARD rule. And it still worked!?!? – jersey bean Sep 10 '20 at 20:02
  • Maybe k8s already set this up for me: 3969K 2176M KUBE-FORWARD all -- * * 0.0.0.0/0 0.0.0.0/0 /* kubernetes forwarding rules */ – jersey bean Sep 10 '20 at 20:07
  • Can you please paste the results of running `sudo iptables -L`? – Malgorzata Sep 18 '20 at 08:02
  • Sorry, I don't have the command output anymore. – jersey bean Sep 21 '20 at 16:24