
I created a GKE cluster with a node pool, but I forgot to label the nodes... In the Google Cloud Platform UI I can't edit or add Kubernetes labels for the existing node pool... How can I do it without recreating the whole node pool?

The label field is unchangeable

malcolm

5 Answers


It isn't possible to edit the labels without recreating the nodes, so GKE does not support updating labels on existing node pools.

In GKE, the Kubernetes labels are applied to nodes by the kubelet binary which receives them as flags passed in via the node startup script. As it is just as disruptive (or more disruptive) to recreate all nodes in a node pool as to create a new node pool, updating the labels isn't a supported operation for updating a node pool.
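Because labels can only be set when a pool is created, the practical workaround is to create a replacement pool with the labels you want. A minimal sketch with `gcloud` (the cluster name, zone, and labels below are illustrative):

```shell
# Create a new node pool whose nodes carry the desired Kubernetes labels.
# --node-labels is applied by the kubelet to every node this pool creates,
# including nodes recreated later by autoscaling or upgrades.
gcloud container node-pools create labeled-pool \
  --cluster my-cluster \
  --zone us-central1-a \
  --node-labels=env=prod,team=backend
```

Workloads can then be moved over and the old, unlabeled pool deleted, as described in one of the answers below.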

Robert Bailey
  • But I don't understand why it's impossible. I'm just proposing the ability to add a taint or label after the node pool has been created. There's no reason to recreate or restart the nodes in the pool just to add a label or taint! So what's the problem? – malcolm Mar 25 '19 at 07:51
  • The way that the labels are currently added is via kubelet arguments. If the kubelet restarts, it will re-apply the labels that it was started with, erasing any changes that you've made otherwise (as was noted in the other answer). The Kubernetes community is working on removing the ability of the kubelet to self-label, so the way this works will change in the future and once that change is in GKE I'd expect the update node pool API to support dynamically changing labels. – Robert Bailey Mar 26 '19 at 05:47
  • Cool. Let's wait. – malcolm Mar 26 '19 at 10:25
0

You can edit your node configuration, including labels, with kubectl:

kubectl edit node <your node name>

Use kubectl get nodes to get a list of your nodes. If you're having trouble connecting to your GKE cluster, see the GKE documentation on configuring cluster access for kubectl.
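For example (the node name below is illustrative; list your own nodes first):

```shell
# List node names in the cluster
kubectl get nodes

# Opens the node object in $EDITOR; add entries under metadata.labels
kubectl edit node gke-my-cluster-default-pool-1a2b3c4d-xyz9
```

Note that edits made this way live only on the node object, not in the node pool configuration.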

Aleksi
  • No, this solution will not help me, because if I resize my node pool to 0 and then back to 3, the labels will disappear, since the nodes are new. – malcolm Mar 21 '19 at 07:29
  • Good point. I think we're out of options then, since the node pool update API [doesn't handle labels](https://cloud.google.com/kubernetes-engine/docs/reference/rest/v1/projects.zones.clusters.nodePools/update). – Aleksi Mar 21 '19 at 08:06
0

Make a new node pool that is the way you want, and then migrate your workload to it. Then destroy the old pool.

Depending on your workload, there could be a "blip" in your service as pods are moved to the new node.

I define two node pools: blue and green. At any given time only one pool is up.

If I need to make a change:

  1. I make sure that automation of the down node pool matches at least the config of the up pool.
  2. Then I make the change in the automation that I want in the downed pool.
  3. Then I bring up that pool.
  4. I baby-sit the migration of workloads to the new node pool using cordon / drain.
  5. Then I destroy the old pool.
  6. Then I make sure that the automation for the old, now-down pool matches the new up pool.

And I'm ready for my next change.
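Steps 3–5 above can be sketched with `gcloud` and `kubectl`. This assumes pools named `blue` (old) and `green` (new); cluster name, zone, and labels are illustrative:

```shell
# Bring up the new pool with the desired labels
gcloud container node-pools create green \
  --cluster my-cluster --zone us-central1-a \
  --node-labels=color=green

# Stop scheduling onto the old pool's nodes, then evict their pods.
# GKE sets the cloud.google.com/gke-nodepool label on every node.
for node in $(kubectl get nodes -l cloud.google.com/gke-nodepool=blue -o name); do
  kubectl cordon "$node"
  # Older kubectl versions use --delete-local-data instead
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
done

# Once workloads are healthy on the new pool, destroy the old one
gcloud container node-pools delete blue \
  --cluster my-cluster --zone us-central1-a
```

Draining node by node keeps the "blip" small, since the scheduler moves pods to the new pool as each node empties.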

David

David Thornton
0
kubectl label node <node_name> <label_key>=<label_value>

This allows you to add a label to any of your already-running nodes.
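For example (node name and label are illustrative):

```shell
# Label a running node directly
kubectl label node gke-my-cluster-default-pool-1a2b3c4d-xyz9 env=prod

# Verify which nodes carry the label
kubectl get nodes -l env=prod
```

Keep in mind the caveat from the comments above: labels added this way live on the node object only, so they are lost when the node is recreated (resize, autoscaling, upgrades).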

Raymond A
0

If you use Terraform and are making changes to the labels, the following will work:

  lifecycle {
    ignore_changes = [
      # Since we provide `remove_default_node_pool = true`, the `node_config` is only relevant for a valid construction of
      # the GKE cluster in the initial creation. As such, any changes to the `node_config` should be ignored.
      node_config,
    ]
  }

This is based on the answer to "Label change of GKE terraform brought down entire cluster". Refer to: terraform-google-gke/modules/gke-cluster/main.tf
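For context, a minimal sketch of where that lifecycle block sits, using the google provider's `google_container_cluster` resource (name, location, and counts are illustrative):

```terraform
resource "google_container_cluster" "primary" {
  name                     = "my-cluster"
  location                 = "us-central1-a"
  remove_default_node_pool = true
  initial_node_count       = 1

  lifecycle {
    # node_config only matters at initial creation here, so ignore
    # later changes to it to avoid recreating the cluster.
    ignore_changes = [
      node_config,
    ]
  }
}
```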

More information:

  1. Terraform Lifecycle Ignore changes
  2. lifecycle#ignore_changes
  3. container_cluster#taint
Alexred