
I have a new Docker image and would ideally like to perform a smooth upgrade to it, then either forget the previously deployed version or keep only the immediately previous version rather than every version ever deployed.

Kubernetes Pods will pull the latest image upon being restarted if the image is tagged :latest or if imagePullPolicy: Always is set.

However, unless the image tag changes, doing a kubectl apply or kubectl replace will not restart Pods and hence will not trigger a pull of the latest image. Tagging every build means a complicated script to keep removing old tagged images (unless someone has a trick here).

Doing a kubectl rolling-update ... --image ... is only possible if there is a single container per Pod.

What does work, is eventually clean, and always gets the latest image, is deleting the namespace and re-creating all Pods/RCs/Services...
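
Concretely, what I do today is roughly the following (my-ns and the manifest directory are placeholders; the manifests are assumed to include the namespace definition):

# kubectl delete namespace my-ns   # removes all pods/rc/services in the namespace
# kubectl create -f ./manifests/   # re-creates the namespace and every resource in it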

How can I ask Kubernetes to use my new images nicely even if there is more than one container per Pod?

Wernight

2 Answers


Dirty workaround (not tested): you can scale the RC down to 0 and then back up to its original size => it'll be a "Pod" restart. Or you can use two RCs behind the same Service, one active (non-zero size) and one passive (size 0), and alternate which one you scale up and down.
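
A minimal sketch of the scale-down/up variant (my-rc and the replica count 3 are hypothetical placeholders):

# kubectl scale rc my-rc --replicas=0   # all Pods are deleted
# kubectl scale rc my-rc --replicas=3   # back to original size; new Pods pull the image again (with imagePullPolicy: Always)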

Tagging every build means a complicated script to keep removing old tagged images (unless someone has a trick here).

Tagging is a nice, explicit process. Kubernetes garbage collection will delete your old images from the nodes automatically. Hopefully you know that if you use only the :latest tag, rollback can be impossible. I recommend setting up a tag system, for example :latest_stable, :latest_dev, :2nd_latest_stable, ....

These tags will only be "pointers" that your CI moves. Then you can define and script a smart registry tag-deletion policy, e.g. all tags older than :2nd_latest_stable can be deleted safely. You know your app, so you can set up a policy that fits your needs and your release process.

Tag example - starting point is builds 1/2/3 (build ID, git ID, build time, ...) - build 1 is :production and :canary, and all tags are pushed:

# docker images
REPOSITORY                                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
image                                       3                   a21348af4283        37 seconds ago      125.1 MB
image                                       2                   7dda7c549d2d        50 seconds ago      125.1 MB
image                                       production          e53856d910b8        58 seconds ago      125.1 MB
image                                       canary              e53856d910b8        58 seconds ago      125.1 MB
image                                       1                   e53856d910b8        58 seconds ago      125.1 MB

Build 2 is going to be :canary:

# docker tag -f image:2 image:canary
# docker push image:canary
# docker images
REPOSITORY                                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
image                                       3                   a21348af4283        6 minutes ago       125.1 MB
image                                       canary              7dda7c549d2d        6 minutes ago       125.1 MB
image                                       2                   7dda7c549d2d        6 minutes ago       125.1 MB
image                                       production          e53856d910b8        7 minutes ago       125.1 MB
image                                       1                   e53856d910b8        7 minutes ago       125.1 MB

Tests pass and build 2 is stable - it becomes :production:

# docker tag -f image:2 image:production
# docker push image:production
# docker images
REPOSITORY                                  TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
image                                       3                   a21348af4283        9 minutes ago       125.1 MB
image                                       2                   7dda7c549d2d        9 minutes ago       125.1 MB
image                                       canary              7dda7c549d2d        9 minutes ago       125.1 MB
image                                       production          7dda7c549d2d        9 minutes ago       125.1 MB
image                                       1                   e53856d910b8        10 minutes ago      125.1 MB

Homework: suppose build 2 actually turns out not to be stable -> point :production back to build 1 (rollback) and :canary to build 3 (to test the fix in build 3). If you are using only :latest, this rollback is impossible.
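
Following the same retagging pattern as above, the rollback would look like:

# docker tag -f image:1 image:production   # rollback: production points to build 1 again
# docker push image:production
# docker tag -f image:3 image:canary       # canary now tests the fix in build 3
# docker push image:canary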

kubectl rolling-update/rollback will use the explicit :id tags, and your cleanup script can apply the policy: all tags older than :production can be deleted.
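
For example, a rolling update to an explicit build tag could look like this (my-rc is a hypothetical RC name):

# kubectl rolling-update my-rc --image=image:2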

Unfortunately I don't have experience with Kubernetes deployment.

Jan Garaj
  • GC may work for the cluster, but the Docker registry will not be cleaned automatically. Supposing I use the git commit hash as the tag (IMO the simplest), it'll keep all previous commits as well. – Wernight Feb 20 '16 at 22:09
  • Thanks for the update. Would you have a more complete example where you do a full deploy and retag? Like pushing a new `:canary`, then replacing the previous `:production`, including the Docker push? I think the new experimental Deployment is supposed to handle this automatically, but it's not yet released. – Wernight Feb 21 '16 at 21:10
  • I do know how to push the image, but from what I see I'd need multiple kubernetes.yml files with different labels, first doing a rolling-update, then after retagging doing a kubectl apply or similar - that's the part I'm not sure of. Also, the registry doesn't support tagging from an existing tag remotely as far as I know, so it'd be another ~20 s of overhead per build, etc. – Wernight Feb 22 '16 at 23:02
  • @jan-garaj Your answer has many interesting ideas and somewhat useful advice, but it was unpleasant to see the snippet starting with "Homework: ..." as it looks like you are a teacher giving homework to your student. As far as I am aware this is not correct behavior here on Stack Overflow, which is a place to give answers to questions, not to hand out teachings and homework. I am sorry if I understood this incorrectly. Another concern is the closing line where you state that you don't have experience with k8s deployment: why give advice on k8s deployment then? o_O – wobmene May 02 '21 at 06:30

How about tagging the deployment with a label whose value is a timestamp or commit hash, and then using kubectl apply as you usually would? Changing a label in the template should trigger pulling the image again (if imagePullPolicy: Always is set) and a rolling upgrade (depending on configuration).
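
A minimal sketch of that idea, assuming a hypothetical template deployment.yml.in whose Pod template carries a BUILD_ID placeholder label:

# sed "s/BUILD_ID/$(git rev-parse --short HEAD)/" deployment.yml.in > deployment.yml
# kubectl apply -f deployment.yml   # changed label => new Pods => fresh image pull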

vvucetic