
I'm building several applications that I want to run on remote servers in Docker containers. Imagine I've made a project 'foo' and written a Dockerfile for it. I build the image on my local machine and run it in a container, and everything works fine; now I want to run a container with the same image on a remote machine. These are the options I see:

  1. I can publish it on Docker Hub, but the image isn't small and I don't want to wait every time the image changes (e.g. after I've made some fixes). So that's not a good option for me.
  2. I can set up a private registry on the local network where my production servers are. This is much better, because the connection is fast and the repository is under my control. But setting up a private Docker registry is painful because of HTTPS: you have to add the registry to `insecure-registries` on every daemon, and then you still have to build on your own machine, tag the image, log in, and push. Too many steps (a minimal sketch of this workflow follows the list)...
  3. I found another way: I archive my application together with its Dockerfile into a tar.gz, upload it to a remote server (Nexus 3, raw repository), and then configure Jenkins to download the archive and run `docker build`. This isn't very convenient either: I have to create a new job for every image, and the jobs are identical except for URLs and names. I also run Jenkins itself in a container; is that right, or should I use something like Kubernetes?
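
For reference, here is a minimal sketch of the steps from option 2, assuming a registry reachable at `registry.local:5000` (the hostname, port, and image name are placeholders):

    # Whitelist the plain-HTTP registry on every daemon, either with
    # `dockerd --insecure-registry registry.local:5000` or in
    # /etc/docker/daemon.json:
    #   { "insecure-registries": ["registry.local:5000"] }
    # then restart the daemon.

    # On the build machine: build, tag, log in, push.
    docker build -t foo:latest .
    docker tag foo:latest registry.local:5000/foo:latest
    docker login registry.local:5000
    docker push registry.local:5000/foo:latest

    # On the production server: pull and run.
    docker pull registry.local:5000/foo:latest
    docker run -d --name foo registry.local:5000/foo:latest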

Can you share your experience with how you solve issues like the ones described above? Am I doing it right, or is this bad practice?


1 Answer


Put the code in a Git server (GitHub, Stash, etc.), get Jenkins to detect when the code has changed, build it, then deploy it to a server, and if it's good, upload it to Nexus.

You can name the image whatever you like, e.g. version-{auto-incremented number}; set that as a variable, and then use the same variable to name the archive when uploading.
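
For example, a single parameterized Jenkins build step could look roughly like the sketch below. `IMAGE_NAME`, `REGISTRY`, `NEXUS_URL`, `NEXUS_USER`, and `NEXUS_PASS` are hypothetical job parameters; `BUILD_NUMBER` is Jenkins's built-in counter:

    #!/bin/sh
    # Hypothetical parameterized build step: one job serves every image,
    # only the parameters change. BUILD_NUMBER is supplied by Jenkins.
    set -e
    TAG="version-${BUILD_NUMBER}"

    # Build, then run your checks; push only if the image is good.
    docker build -t "${IMAGE_NAME}:${TAG}" .
    docker tag "${IMAGE_NAME}:${TAG}" "${REGISTRY}/${IMAGE_NAME}:${TAG}"
    docker push "${REGISTRY}/${IMAGE_NAME}:${TAG}"

    # Upload the matching source archive to Nexus under the same tag,
    # so the image and the archive share one version number.
    git archive --format=tar.gz -o "${IMAGE_NAME}-${TAG}.tar.gz" HEAD
    curl -u "${NEXUS_USER}:${NEXUS_PASS}" \
         --upload-file "${IMAGE_NAME}-${TAG}.tar.gz" \
         "${NEXUS_URL}/repository/raw/${IMAGE_NAME}-${TAG}.tar.gz"

Because the job is parameterized, adding a new image is a matter of supplying a different `IMAGE_NAME` rather than cloning the whole job.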