With the help of kubespray I got a Kubernetes cluster running on 3 machines. Two of them (node1, node2) are master nodes, and all three (node1, node2, node3) are worker nodes, so the cluster should meet my requirement of being highly available. I wanted to test the availability when some nodes are down and see how the cluster reacts.

The problem: I bring down node2 and node3, so only node1 is running. When I run `kubectl get nodes` on node1, it returns: The connection to the server 10.1.1.44:6443 was refused - did you specify the right host or port?

What's strange: when node1 and node2 (all masters) are running, the API works as it should. But as soon as just one master is down, the API returns the message above.

I expected node1 to keep working without the other master/worker nodes. Am I missing something here?

Used: kubespray v2.11.0

Edited group_vars/all/all.yml:

## Internal loadbalancers for apiservers
loadbalancer_apiserver_localhost: true
# valid options are "nginx" or "haproxy"
loadbalancer_apiserver_type: nginx

My hosts.yml:

all:
  hosts:
    node1:
      ansible_host: 10.1.1.44
      ip: 10.1.1.44
      access_ip: 10.1.1.44
    node2:
      ansible_host: 10.1.1.45
      ip: 10.1.1.45
      access_ip: 10.1.1.45
    node3:
      ansible_host: 10.1.1.46
      ip: 10.1.1.46
      access_ip: 10.1.1.46
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
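A note on the inventory above: it places etcd on all three nodes. etcd is Raft-based and only serves requests while a majority (quorum) of its members is up, and kube-apiserver cannot run without its etcd backend, which would explain a "connection refused" on port 6443 once two of three nodes are down. A minimal sketch of the quorum arithmetic (plain Raft majority rule, nothing kubespray-specific):

```python
# Sketch: why a 3-member etcd cluster stops working with only 1 node left.
# Assumes standard Raft majority quorum, as used by etcd.

def quorum(members: int) -> int:
    """Minimum number of members that must be up for etcd to serve writes."""
    return members // 2 + 1

def fault_tolerance(members: int) -> int:
    """How many members can fail while the cluster stays available."""
    return members - quorum(members)

for n in (1, 2, 3, 5):
    print(f"{n} members: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
# 3 members: quorum=2, tolerates 1 failure(s)
```

With 3 etcd members the quorum is 2, so losing node2 and node3 leaves node1 below quorum and the API server on node1 goes down with it; losing only one node keeps the cluster alive, which matches the behaviour described above.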
algtr
  • Is there the same problem when you bring down node1 and node3 and leave node2? Please provide information from `kubectl get nodes -o wide`. – Jakub Oct 16 '19 at 07:00
  • @jt97 I think I found my issue. The problem was stated wrongly on my side: it works when at least 2 of 3 nodes are running. As soon as there is just one running node, the cluster won't work anymore. – algtr Oct 16 '19 at 09:19
  • So please update your question. Please check if your masters are untainted; you can read about control plane node isolation [here](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#control-plane-node-isolation). – Jakub Oct 16 '19 at 10:09

0 Answers