I have configured a Kubernetes cluster with kubeadm by creating three VirtualBox nodes, each running CentOS (master, node1, node2). Each VirtualBox virtual machine is configured with 'Bridged' networking. As a result, I have the following setup:

  1. Master node 'master.k8s' running at 192.168.19.87 (VirtualBox)
  2. Worker node 1 'node1.k8s' running at 192.168.19.88 (VirtualBox)
  3. Worker node 2 'node2.k8s' running at 192.168.19.89 (VirtualBox)

Now I would like to access services running in the cluster from my local machine (the physical machine where the VirtualBox VMs are running).

Running kubectl cluster-info, I see the following output:

Kubernetes master is running at https://192.168.19.87:6443
KubeDNS is running at ...

As an example, let's say I deploy the dashboard inside my cluster. How do I open the dashboard UI in a browser running on my physical machine?

Salvatore
  • Are you able to ssh into any of the nodes from your host machine? I think the setup would be relatively the same for accessing the cluster, so you may want to look into it. I think you'd need to ensure port 6443 is forwarded to the host machine so you can access it. – Grant David Bachman Mar 27 '18 at 13:27

2 Answers


The traditional ways are kubectl proxy or a LoadBalancer, but since you are on a development machine, a NodePort can be used to publish the applications; a LoadBalancer is not available in VirtualBox.

The following example deploys 3 replicas of an nginx-based echo server and publishes the HTTP port using a NodePort:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: my-echo
          image: gcr.io/google_containers/echoserver:1.8          
---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
  labels:
    name: nginx-service-np
spec:
  type: NodePort
  ports:
    - port: 8082        # Cluster IP http://10.109.199.234:8082
      targetPort: 8080  # Application port
      nodePort: 30000   # Example (EXTERNAL-IP VirtualBox IPs) http://192.168.50.11:30000/ http://192.168.50.12:30000/ http://192.168.50.13:30000/
      protocol: TCP
      name: http
  selector:
    app: nginx

You can access the servers using any of the node IPs; with the VirtualBox IPs of the linked example below, that is http://192.168.50.11:30000, http://192.168.50.12:30000 or http://192.168.50.13:30000, and in your setup it would be http://192.168.19.87:30000, http://192.168.19.88:30000 or http://192.168.19.89:30000.
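
For instance, a minimal sketch of applying and testing it (echo-nodeport.yaml is a hypothetical filename for the two manifests above; substitute your own node IP):

kubectl apply -f echo-nodeport.yaml   # create the Deployment and the Service
kubectl get svc nginx-service-np      # confirm nodePort 30000 was allocated
curl http://192.168.19.87:30000/      # any node IP works; kube-proxy routes to one of the pods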

See a full example at Building a Kubernetes Cluster with Vagrant and Ansible (without Minikube).

Javier Ruiz

The traditional way of getting access to the Kubernetes dashboard is documented in its README and is to use kubectl proxy.
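
As a minimal sketch, assuming the dashboard is deployed in the kube-system namespace under the service name kubernetes-dashboard (check with kubectl -n kube-system get svc; the exact proxy path depends on the dashboard version):

kubectl proxy   # run on the machine with the browser; listens on 127.0.0.1:8001 by default
# then open in the browser:
# http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/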

One should not have to ssh into the cluster to access any kubernetes service, since that would defeat the purpose of having a cluster, and would absolutely shoot a hole in the cluster's security model. Any ssh to Nodes should be reserved for "in case of emergency, break glass" situations.

More generally speaking, a well configured Ingress controller will surface services en masse, and it has the very pleasing side effect that your local cluster will operate exactly the same as your "for real" cluster, without any underhanded ssh-ery required.
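
A minimal sketch of such an Ingress (the host name myapp.local, the Ingress name, and the backing Service my-service are all hypothetical; this assumes an ingress controller such as ingress-nginx is already installed, and uses the extensions/v1beta1 API that was current at the time, now networking.k8s.io/v1):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-service-ingress        # hypothetical name
spec:
  rules:
    - host: myapp.local           # hypothetical host; map it to a node IP in /etc/hosts
      http:
        paths:
          - path: /
            backend:
              serviceName: my-service   # hypothetical Service exposing your app on port 80
              servicePort: 80

With the controller's own service published (for example via a NodePort, as in the other answer), http://myapp.local:<controller-port>/ then reaches the service from the host browser.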

mdaniel
  • I guess this needs 'kubectl proxy' to run on the host (the physical machine hosting the three virtual machines). Unfortunately, at the moment I can only run kubectl from inside the VM that is running the master node. – Salvatore Mar 28 '18 at 15:30
  • Yes, it does, but if you are playing with Kubernetes you are 100% going to want a working `kubectl` on the host; otherwise, wow, it will be painful to have to scp all the yaml (et al.) into the VM, apply it with kubectl, repeat – mdaniel Mar 29 '18 at 04:55
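
For reference, with a kubeadm cluster the usual way to get a working kubectl on the host is to copy the admin kubeconfig off the master; a minimal sketch, assuming ssh access as root to the master at 192.168.19.87 (adjust user, IP and paths to your setup):

mkdir -p ~/.kube
scp root@192.168.19.87:/etc/kubernetes/admin.conf ~/.kube/config   # kubeadm writes the admin kubeconfig here
kubectl cluster-info                                               # should now answer from the host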