
Can I run a job and a deploy in a single config file/action, where the deploy waits for the job to finish and checks whether it succeeded before continuing with the deployment?

  • Yes you can. You could use an [InitContainer](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/), for instance. I'm writing an answer about it; if you share more details about what kind of job it is, I can tailor the answer to fit your case better. Is this job running inside or outside Kubernetes? – Will R.O.F. Mar 18 '20 at 13:14
  • @willrof It's a migration job: it executes Sequelize migrations against the DB of a Node.js project, and if all migrations pass successfully I want the deploy to start – Igor Igeto Mitkovski Mar 18 '20 at 13:42
  • @willrof I can't upvote you because I don't have enough reputation. Also, I can't make your example work; I am getting this error: `Error from server (BadRequest): container "container-name" in pod "container-name-6db8f67df5-8r8wp" is waiting to start: PodInitializing` – Igor Igeto Mitkovski Mar 22 '20 at 09:36
  • I believe I made it work; you were missing the service for the app. Thank you for the help, and can you provide the link to the Kubernetes documentation your answer is based on? Thank you – Igor Igeto Mitkovski Mar 22 '20 at 10:13
  • I'm glad I helped you. The link is the same one from the beginning of my answer: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/#init-containers-in-use – Will R.O.F. Mar 22 '20 at 10:36
  • @willrof Thank you, I have just one more, maybe not so important, problem. For some reason the initContainer won't accept the restart policy Never. If I write any invalid value I get an error saying the only valid values are Always, Never, and OnFailure, but if I use Never or OnFailure I get an error saying that Always is the only valid value... Do you know anything about this? – Igor Igeto Mitkovski Mar 22 '20 at 10:47
  • Remember that `restartPolicy` is a Pod field, not a container field; it applies to every container inside a pod. Maybe that's why you are getting this error. If the question is related, paste your yaml into your question and I can check it for you. – Will R.O.F. Mar 23 '20 at 13:02
  • @willrof It's on the initContainers level: `initContainers: ... restartPolicy: Always` – Igor Igeto Mitkovski Mar 23 '20 at 16:46
  • I added a `restartPolicy: Always` field to the pod yaml in the answer to demonstrate where it should be positioned. It should not be under `containers` or `initContainers`; it should be one level up, under `spec.template.spec`, because it applies to all containers inside the pod. There you can change it to Never or OnFailure. – Will R.O.F. Mar 23 '20 at 19:13
  • @willrof Yep, it's on that level, but it won't accept any value other than Always – Igor Igeto Mitkovski Mar 24 '20 at 08:39
  • I'll try to reproduce it and let you know; it should indeed accept other values. – Will R.O.F. Mar 25 '20 at 12:59
  • I've done some research: a Deployment indeed accepts only the "Always" value; it has been discussed in this issue: [Github Kubernetes Issue #24725](https://github.com/kubernetes/kubernetes/issues/24725#issuecomment-511799121). When we create a bare pod we have the option to set `restartPolicy` to Never or OnFailure, but a deployment does not have that option. This is not really part of your original question, and restartPolicy Always seems correct for your case, since you just want to proceed if the job is successful. – Will R.O.F. Mar 26 '20 at 12:48
  • If your scenario requires `Never` or `OnFailure`, the official recommendation is to use a `Job` kind, as specified here: [Writing a Job Spec](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#writing-a-job-spec). Jobs do not accept `Always` as `restartPolicy`, so you can choose between these options depending on your need. I hope to have helped you. – Will R.O.F. Mar 26 '20 at 12:50
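To illustrate the distinction made in this thread: a standalone `Job` may use the restart policies a Deployment's pod template rejects. A minimal sketch, with hypothetical names and image:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration          # hypothetical name
spec:
  backoffLimit: 3             # retry the pod up to 3 times on failure
  template:
    spec:
      restartPolicy: OnFailure   # Jobs accept Never/OnFailure, never Always
      containers:
      - name: migrate
        image: my-app:latest     # hypothetical image with the migration tooling
        command: ['sh', '-c', 'npx sequelize-cli db:migrate']
```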

2 Answers


Based on the information you provided I believe you can achieve your goal using a Kubernetes feature called InitContainer:

Init containers are exactly like regular containers, except:

  • Init containers always run to completion.
  • Each init container must complete successfully before the next one starts.

If a Pod’s init container fails, Kubernetes repeatedly restarts the Pod until the init container succeeds. However, if the Pod has a restartPolicy of Never, Kubernetes does not restart the Pod.

  • I'll create an initContainer with a busybox image that runs a Linux command to wait for the service mydb to be running before proceeding with the deployment.

Steps to reproduce:

  • Create a Deployment with an initContainer which will run the job that needs to complete before the deployment proceeds:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: my-app
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      run: my-app
  template:
    metadata:
      labels:
        run: my-app
    spec:
      restartPolicy: Always
      containers:
      - name: myapp-container
        image: busybox:1.28
        command: ['sh', '-c', 'echo The app is running! && sleep 3600']
      initContainers:
      - name: init-mydb
        image: busybox:1.28
        command: ['sh', '-c', "until nslookup mydb.$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace).svc.cluster.local; do echo waiting for mydb; sleep 2; done"]

Many kinds of commands can be used in this field; you just have to select a Docker image that contains the binary you need (including one for your Sequelize job).
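For the Sequelize case from the question, the init container could run the migrations themselves rather than just waiting for a service. A minimal sketch of the `initContainers` section, assuming a hypothetical `my-app` image that ships sequelize-cli:

```yaml
      initContainers:
      - name: run-migrations
        image: my-app:latest       # hypothetical image containing sequelize-cli
        command: ['sh', '-c', 'npx sequelize-cli db:migrate']
        # If this command exits non-zero, the pod stays in Init status
        # (and is restarted per restartPolicy), so the app container
        # only starts once the migrations have succeeded.
```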

  • Now let's apply it and see the status of the deployment:
$ kubectl apply -f my-app.yaml 
deployment.apps/my-app created

$ kubectl get pods
NAME                      READY   STATUS     RESTARTS   AGE
my-app-6b4fb4958f-44ds7   0/1     Init:0/1   0          4s
my-app-6b4fb4958f-s7wmr   0/1     Init:0/1   0          4s

The pods are held in `Init:0/1` status, waiting for the completion of the init container.

  • Now let's create the service which the initContainer is waiting for before completing its task:

apiVersion: v1
kind: Service
metadata:
  name: mydb
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9377
  • We will apply it and monitor the changes in the pods:
$ kubectl apply -f mydb-svc.yaml 
service/mydb created

$ kubectl get pods -w
NAME                      READY   STATUS     RESTARTS   AGE
my-app-6b4fb4958f-44ds7   0/1     Init:0/1   0          91s
my-app-6b4fb4958f-s7wmr   0/1     Init:0/1   0          91s
my-app-6b4fb4958f-s7wmr   0/1     PodInitializing   0          93s
my-app-6b4fb4958f-44ds7   0/1     PodInitializing   0          94s
my-app-6b4fb4958f-s7wmr   1/1     Running           0          94s
my-app-6b4fb4958f-44ds7   1/1     Running           0          95s
^C
$ kubectl get all
NAME                          READY   STATUS    RESTARTS   AGE
pod/my-app-6b4fb4958f-44ds7   1/1     Running   0          99s
pod/my-app-6b4fb4958f-s7wmr   1/1     Running   0          99s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/mydb         ClusterIP   10.100.106.67   <none>        80/TCP    14s

NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-app   2/2     2            2           99s

NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/my-app-6b4fb4958f   2         2         2       99s

If you need help to apply this to your environment let me know.

Will R.O.F.

Although initContainers are a viable option for this problem, there is an alternative if you use Helm to manage and deploy to your cluster.

Helm has chart hooks that allow you to run a Job before other installations in the Helm chart occur. You mentioned that this is for a database migration before a service deployment. Some example Helm config to get this done could be...

apiVersion: batch/v1
kind: Job
metadata:
  name: api-migration-job
  namespace: default
  labels:
    app: api-migration-job
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      containers:
        - name: platform-migration
        ...

This will run the job to completion before moving on to the installation/upgrade phases of the Helm chart. You can see there is a `hook-weight` annotation that lets you order these hooks if you desire.

This, in my opinion, is a more elegant solution than init containers, and it allows for better control.
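With the hook annotations in place, a regular install or upgrade is all that's needed; the migration Job runs first and, if it fails, the release is not applied. A sketch with hypothetical release and chart names:

```shell
# pre-install/pre-upgrade hooks run the migration Job to completion
# before the rest of the chart is installed or upgraded.
helm upgrade --install my-api ./my-api-chart
```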

iquestionshard