
I'm working on the manifest of a kubernetes job.

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-job
spec:
  template:
    spec:
      containers:
      - name: hello
        image: hello-image:latest
      restartPolicy: Never  # required for Jobs; the default (Always) is not allowed
```

I then apply the manifest using `kubectl apply -f <deployment.yaml>` and the job runs without any issue.

The problem comes when I change the image tag of the running container from `latest` to something else.

At that point I get a `field is immutable` error when applying the manifest.

I get the same error whether the job is running or completed. The only workaround I have found so far is to manually delete the job before applying the new manifest.

How can I update the current job without having to manually delete it first?

Riccardo
  • Hi, `kubectl set image --help` might work – Suresh Vishnoi Jul 24 '19 at 09:03
    @SureshVishnoi from the help: `Possible resources include (case insensitive): pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs)` So no jobs. – Riccardo Jul 24 '19 at 09:14

1 Answer


I guess you are probably using the wrong Kubernetes resource. A Job runs its Pods to completion, and its pod template is immutable once created, so you cannot update it. As per the Kubernetes documentation:

Say Job old is already running. You want existing Pods to keep running, but you want the rest of the Pods it creates to use a different pod template and for the Job to have a new name. You cannot update the Job because these fields are not updatable. Therefore, you delete Job old but leave its pods running, using kubectl delete jobs/old --cascade=false.
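In practice, if you just want to re-run the job with an updated image rather than keep the old Pods around, you can delete and recreate it in one step with `kubectl replace --force`. This is a sketch of that workflow; `job.yaml` is a placeholder for your manifest file:

```shell
# Edit the image tag in the manifest, then delete the old Job
# and create a new one from the file in a single command.
kubectl replace --force -f job.yaml

# Equivalent two-step form:
kubectl delete job hello-job --ignore-not-found
kubectl apply -f job.yaml
```

Note that `--force` here deletes and recreates the resource, so any Pods from the previous run are terminated rather than left running as in the `--cascade` approach quoted above.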

If you intend to update an image, you should use a Deployment or a ReplicationController instead, both of which support updates.
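For example, a minimal Deployment carrying the same container might look like this (a sketch; the names and labels are illustrative, and this only fits if the workload is long-running, since a Deployment restarts containers rather than running them to completion):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: hello-image:latest
```

With a Deployment, the image can then be updated in place (this is one of the resources `kubectl set image` supports, as noted in the comments above), e.g. `kubectl set image deployment/hello-deployment hello=hello-image:v2`, which triggers a rolling update.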

fatcook