
I have 4 nodes in 2 node pools on GKE. One of them has a static IP (to reach an AWS service through a whitelist), so I labeled this node by hand using kubectl label. I found that with auto-upgrade enabled, this node disappears after an upgrade (destroyed and recreated?). Afterwards no node has the static IP or the label, leaving some pods unschedulable. So, I have some questions:

  1. Should I turn off auto-upgrade for that particular node pool?
  2. What happens if node pools run different versions of Kubernetes?
  3. Is there a best practice for my situation?
chux0519
  • What is the master version? Is your node pool version different from the other affected node pool? What do you mean by 'no node has the static IP or label'? Do you mean just one node in the node pool is affected? I invite you to take a look at this [page](https://cloud.google.com/kubernetes-engine/docs/how-to/creating-managing-labels) that provides an overview of cluster labels in GKE. – Milad Tabrizi Nov 17 '18 at 01:29
  • Say I have 2 node pools. Node pool A has node A-1; node pool B has B-1 and B-2. I only have one IP that is allowed to reach the AWS service, so I bind that IP to A-1 and label A-1 by hand. With auto-upgrade enabled, A-1's label and IP change after an upgrade and I have to label it again. So I just turned off auto-upgrade for node pool A. The master version is 1.9.7-gke.11 now; A-1 is on version 1.9.7-gke.7, while B-1 and B-2 are both on 1.9.7-gke.11. Nothing seems wrong so far. – chux0519 Nov 19 '18 at 08:18
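For reference, the per-node versions and labels discussed in the comment above can be inspected with kubectl; a minimal sketch (the node name and label key are illustrative, taken from the comment's naming rather than real cluster output):

```
# Show each node's kubelet version (VERSION column)
kubectl get nodes -o wide

# Show node labels, e.g. a hand-applied label marking the static-IP node
kubectl get nodes --show-labels

# The manual labeling step described in the question (key/value illustrative)
kubectl label nodes A-1 static-egress=true
```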

1 Answer


I recommend using Cloud NAT to avoid losing your IP address: assign your static IP address to the Cloud NAT gateway, and the IP address will survive scaling and auto-upgrades. Cloud NAT lets your VM instances and container pods communicate with the internet through a shared public IP address, using a NAT gateway to manage those connections.
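A minimal sketch of that setup with gcloud, assuming a default network in us-central1 (the resource names here are illustrative, not from the thread):

```
# Reserve a static external IP to use as the NAT address
gcloud compute addresses create nat-ip --region=us-central1

# Cloud NAT is configured on a Cloud Router
gcloud compute routers create nat-router \
    --network=default --region=us-central1

# Create the NAT gateway, pinning it to the reserved IP
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges
```

As the comments below point out, Cloud NAT only takes effect for nodes without external IPs, i.e. a private cluster.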

Also, manually modifying Kubernetes labels is not a good practice. I recommend adding the labels to the node pool's template instead, by passing the Kubernetes node labels when the node pool is created. Note that you can only set these labels at the node pool level during creation; you cannot edit an existing node pool to add a label.
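A sketch of creating a node pool with a label baked in, using gcloud (the pool, cluster, and label names are illustrative):

```
# Create a node pool whose nodes carry a Kubernetes label from creation,
# so nodes recreated by auto-upgrade come back with the label applied
gcloud container node-pools create static-egress-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --num-nodes=1 \
    --node-labels=static-egress=true
```

Because recreated nodes are built from the pool template, the label survives upgrades without any manual re-labeling.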

Milad Tabrizi
  • Thank you very much, I think this is exactly the best practice; I will give it a try. – chux0519 Nov 20 '18 at 00:41
  • I've tried Cloud NAT; it did not work because the GKE nodes have their own external IPs. I've decided to use a separate VM to simulate the NAT behavior, like https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine – chux0519 Nov 21 '18 at 07:05
  • Cloud NAT only works with private clusters (no external IPs). Cloud NAT as a whole will only work if your VMs have no external IP. – Patrick W Nov 21 '18 at 21:54
  • @PatrickW Thank you for the information. I noticed that my cluster is not a private cluster, so using another VM and adding some routing rules would be easier, but I have to make sure the VM is always alive. – chux0519 Nov 23 '18 at 01:46
  • You will need to keep the NAT VM instance running at all times. You can also follow [this guide by Google](https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine) to set up a GKE-with-NAT solution; it was written before Cloud NAT was available. – Patrick W Nov 23 '18 at 15:49
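A rough sketch of the NAT-gateway-VM approach discussed in the comments above, assuming a default network and network-tagged GKE nodes (all names here are illustrative, not from the thread or the linked guide):

```
# Create a VM that may forward traffic it did not originate,
# attaching the reserved static IP so the whitelisted address is preserved
gcloud compute instances create nat-gateway \
    --zone=us-central1-a \
    --address=nat-ip \
    --can-ip-forward \
    --tags=nat-gateway

# On the NAT VM itself: enable forwarding and masquerade outbound traffic
# sudo sysctl -w net.ipv4.ip_forward=1
# sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Route default traffic from the tagged GKE nodes through the NAT VM
gcloud compute routes create gke-via-nat \
    --network=default \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=nat-gateway \
    --next-hop-instance-zone=us-central1-a \
    --tags=gke-node-tag \
    --priority=800
```

As the last comment stresses, this VM is a single point of failure for egress, so it needs monitoring or autohealing to stay alive.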
  • You will need to keep the NAT VM instance running at all times. You can also follow [this guide by Google](https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine) to set up a GKE w/ NAT solution. It was written before Cloud NAT was available. – Patrick W Nov 23 '18 at 15:49