
I am implementing a CI/CD pipeline using Docker, Kubernetes, and Jenkins, and I am pushing the resulting Docker image to a Docker Hub repository.

When the deployment pulls the image, it is not pulling the latest version from the Docker Hub registry, so my application does not show the updated response. My testdeployment.yaml file looks like the following, and the repository credentials are stored in the Jenkinsfile only.

spec:
  containers:
    - name: test-kube-deployment-container
      image: "spacestudymilletech010/spacestudykubernetes:latest"
      imagePullPolicy: Always
      ports:
        - name: http
          containerPort: 8085
          protocol: TCP

Jenkinsfile

 sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
 sh 'docker login --username=<my-username> --password=<my-password>' 
 sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'

How can I identify why it is not pulling the latest image from Docker Hub?

Mr.DevEng
  • @LinPy - Thank you for your response sir. I am already giving the `latest` tag after the image name, and also referring to it in the YAML when pulling. So what do I need to modify here? I did not get your statement. Can you please make it clear, sir? – Mr.DevEng Oct 24 '19 at 10:35
  • 2
    How are you actually starting the pod? Kubernetes in some cases depends on some textual change to things like deployment specs to take action. If you have a deployment running referencing some `:latest` image, push a new copy of the image, and try to re-apply the deployment, Kubernetes has no way to notice that the Docker Hub image has changed; since the new deployment spec is identical to the old one the previous pods get left as-is. – David Maze Oct 24 '19 at 10:57
  • @DavidMaze That's a very good point. – SiHa Oct 24 '19 at 10:59
  • ...as @SiHa says in their answer, the easiest way around this is to never use `:latest`. You can pick anything reasonably unique in your Jenkinsfile to use as the tag (a timestamp, `GIT_COMMIT`, ...). – David Maze Oct 24 '19 at 11:01
  • We often use the build number, sometimes combined with the commit SHA. – SiHa Oct 24 '19 at 11:03
  • @DavidMaze - Is it better to use a timestamp? Every time I am using the `latest` tag when building. When I commit something to my svn repo, it triggers the Jenkins pipeline job, so each update uses the `:latest` tag from image build up to deployment. Is it better to use a timestamp? Please correct me if I am going the wrong way. Thank you for your response sir. – Mr.DevEng Oct 24 '19 at 11:09

1 Answer


It looks like you are repeatedly pushing `:latest` to Docker Hub?

If so, then that's the reason for your issue. You push `latest` to the hub from your Jenkins job, but if the k8s node which runs the deployment's pod already has a tag called `latest` stored locally, then that's what it will use.

To clarify: `latest` is just a string; it could equally well be `foobar`. It doesn't actually mean that Docker will pull the most recent version of the image.
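To illustrate (using `myapp` as a hypothetical image name, purely for demonstration), you can attach any tag string to an image; `latest` is simply the default tag, with no pull-the-newest semantics:

```shell
# Build an image; without an explicit tag, Docker applies ":latest".
docker build -t myapp:latest .

# Attach a second, arbitrary tag to the very same image ID.
docker tag myapp:latest myapp:foobar

# Both tags now point at the same image digest locally;
# "latest" gets no special treatment over "foobar".
docker images myapp
```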

There are two takeaways from this:

  • It's almost always a very bad idea to use `latest` in k8s.
  • It is always a bad idea to push the same tag multiple times; in fact, many repos won't let you.

With regard to using `latest` at all, this comes from personal experience. At my place of work, in the early days of our k8s adoption, we used it everywhere. That is, until we found one day that our Puppet server wasn't working any more. On investigation we found that the node had died, the pod had been re-spun on a different node, and a different `latest` had been pulled, which was a new major release, breaking things.

It was not obvious, because `kubectl describe pod` showed the same tag name as before, so nothing, apparently, had changed.

To add an excellent point mentioned in the comments: you have `imagePullPolicy: Always`, but if you're doing `kubectl apply -f mypod.yaml` with the same tag name, k8s has no way of knowing you've actually changed the image.
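A minimal sketch of what the unique-tag approach could look like in the Jenkins `sh` steps, reusing the image name and workspace paths from the question. `BUILD_NUMBER` is a standard variable Jenkins exposes to shell steps; the deployment name `test-kube-deployment` is an assumption (only the container name appears in the question's YAML), and the `kubectl set image` step assumes `kubectl` is configured on the Jenkins node:

```shell
# Tag each build uniquely instead of reusing ":latest".
# BUILD_NUMBER is provided by Jenkins for every pipeline run.
TAG="build-${BUILD_NUMBER}"
IMAGE="spacestudymilletech010/spacestudykubernetes:${TAG}"

docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile \
  -t "${IMAGE}" /var/lib/jenkins/workspace/jpipeline/pipeline
docker push "${IMAGE}"

# Changing the image field alters the pod template, so Kubernetes
# sees a real change, rolls out new pods, and pulls the fresh tag.
# "test-kube-deployment" is a hypothetical deployment name; the
# container name matches the question's YAML.
kubectl set image deployment/test-kube-deployment \
  test-kube-deployment-container="${IMAGE}"
```

With a unique tag per build, `imagePullPolicy: Always` becomes a safety net rather than the mechanism you rely on to pick up changes.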

SiHa
  • Thank you for your response sir. So what about adding a tag with a timestamp? Can I resolve this problem by using a timestamp tag when pushing and pulling? – Mr.DevEng Oct 24 '19 at 13:04
  • You can (within reason) use anything for the tag. I expect there's a character limit, and, I think docker can be funny if you use underscores vs hyphens (although that may only be for older versions, can't really remember). Having a unique (for that image) tag is your goal. How you achieve it is up to you. – SiHa Oct 24 '19 at 15:46
  • "It is always a bad idea to push the same tag multiple times." I think this requires more explanation. It's common practice to continually update stable tags, but you should also maintain locked "unique" tags that never change. Which ones you utilize depends on the deployment context. Typically production workloads use unique tags. – kthompso Nov 03 '21 at 17:55
  • @kthompso re-pushing of tags is OK, *provided that you understand the consequences*. It has many potential pitfalls, including the one being discussed here. – SiHa Nov 04 '21 at 07:31