
I have a set of pods in my Kubernetes environment that are acting as "buffered resources" (identified by not having a certain label).

In my application (using kubernetes-client), I'd like to check if a buffered resource is available and, if so, add a label so that it is no longer considered for other requests.

However, given parallelism, a pod that is marked as a buffered resource might be reserved by multiple threads at the same time, leading to all kinds of issues in the application.

Without locking the requests being made to Kubernetes, is there a safe way to add a label only if its key does not exist already (and fail otherwise)?

I'm using io.fabric8.kubernetes.client and the code to update labels is more or less:

kubernetesClient.services().inNamespace(namespace).withName(resourceName).edit()
        .editMetadata()
        .addToLabels(Collections.unmodifiableMap(labels))
        .endMetadata()
        .done();

What is the best approach to handling concurrency when talking to the Kubernetes API?

Edit: I see that k8s has resourceVersion, but from my first tests it does not seem to work as expected:

The following call does NOT fail; it succeeds and even assigns a new resourceVersion:

kubernetesClient.services().inNamespace(namespace).withName(resourceName).edit()
        .editMetadata()
        .withResourceVersion("13213414141") // definitely does not match the existing one
        .addToLabels(Collections.unmodifiableMap(labels))
        .endMetadata()
        .done();

Edit2: The kubectl equivalent is something like:

kubectl label pods mypod foo=bar --namespace my-name --resource-version="313"

which correctly fails with the error "the object has been modified; please apply your changes to the latest version and try again".

Frame91
  • Did you try the API call with the same 313 number, or with N-1 of the actual resourceVersion? – Matt Apr 04 '20 at 09:45
  • I think you could execute two things in a loop: first check whether the pod has the desired label, then add the label to the pod if the check failed. The break condition is either that the pod already has the label or that the pod is updated. Assuming no other programs update the Pod, at least one of your threads will succeed, and the others will break after the label check in the next loop. – kitt Apr 06 '20 at 14:12
  • Hi @Matt - I just verified again. Any number (N-1, N, N+1) will lead to a successful update, while the kubectl equivalent will correctly fail for anything but N. – Frame91 Apr 07 '20 at 00:52
  • @Kitt I do need to guarantee that this will work with horizontal scaling in mind. There can be multiple threads/machines "reserving" the resource at the same time. This needs to be prevented. Kubernetes allows this using resourceVersion - it just seems like the kubernetes-client I'm using sadly does not support it. – Frame91 Apr 07 '20 at 00:54
  • Seems I got something wrong. Sorry. – kitt Apr 07 '20 at 05:49
  • Would you mind trying the official client for Java? – kitt Apr 07 '20 at 05:57

1 Answer


You can use a JSON Patch test operation to be concurrency-safe. Something like:

kubectl patch jobs/pi --type=json --patch='[{"op": "test", "path": "/metadata/labels/locked", "value": "false"}, {"op": "replace", "path": "/metadata/labels/locked", "value": "true"}]'

It is applied atomically: if the test operation fails, the whole patch is rejected.

The io.fabric8 Java SDK doesn't support the patch operation, but the official kubernetes-client/java does.
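
With the official client, the same patch could look roughly like the sketch below. This is only a minimal, untested sketch: the pod name, namespace and the locked label are placeholders, and the exact parameter list of patchNamespacedPodCall varies between client versions.

import io.kubernetes.client.custom.V1Patch;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.util.Config;
import io.kubernetes.client.util.PatchUtils;

public class ReservePod {

    public static void main(String[] args) throws Exception {
        ApiClient apiClient = Config.defaultClient();
        CoreV1Api api = new CoreV1Api(apiClient);

        // "test" guards the patch: if the label does not currently have the
        // expected value, the API server rejects the whole patch.
        String jsonPatch = "["
                + "{\"op\": \"test\", \"path\": \"/metadata/labels/locked\", \"value\": \"false\"},"
                + "{\"op\": \"replace\", \"path\": \"/metadata/labels/locked\", \"value\": \"true\"}"
                + "]";

        try {
            // PatchUtils sets the Content-Type to application/json-patch+json,
            // which a JSON Patch request requires.
            V1Pod patched = PatchUtils.patch(
                    V1Pod.class,
                    () -> api.patchNamespacedPodCall(
                            "mypod", "my-namespace", new V1Patch(jsonPatch),
                            null, null, null, null, null),
                    V1Patch.PATCH_FORMAT_JSON_PATCH,
                    apiClient);
            System.out.println("Reserved pod " + patched.getMetadata().getName());
        } catch (ApiException e) {
            // The test op failed - another thread/machine reserved the pod first.
            System.out.println("Pod already reserved: " + e.getResponseBody());
        }
    }
}

Because the server evaluates the whole patch as one unit, only one of the concurrent callers will pass the test op and flip the label; the others get an ApiException and can move on to the next buffered pod.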