
I am using docker-compose to run multi-container software on-premise, in offline networks. What is the best way to deliver software updates when there is no direct connectivity to a Docker registry?

I have tried two options, and neither is sufficient:

  1. Deliver full Docker images created with `docker save` (sketched below). Very inefficient: each image is above 1 GB, and there is no layer-level optimization between updates.
  2. Deliver the customized software separately from the Docker images and map it into generic containers via host volumes. This way I can ship updates containing only the custom software and reduce how often the Docker images themselves need to be updated. Yet it is far from optimal.
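
For concreteness, a minimal sketch of option 1 (the image name and tag are placeholders):

```
# On a machine with registry access: export the full image to an archive
docker save myapp:2.4 | gzip > myapp-2.4.tar.gz

# Carry the archive into the offline network on physical media, then:
gunzip -c myapp-2.4.tar.gz | docker load
docker-compose up -d   # recreate the containers from the freshly loaded image
```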

Is there any way to deliver only the updated Docker layers containing the custom software as a file? Any other ideas?
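
To make the question concrete: layer digests are visible via `docker inspect`, so in principle only the layers missing on the target would need to travel (the tags below are illustrative):

```
# List the layer digests of the old and new image versions
docker inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' myapp:2.3 | sort > old.layers
docker inspect --format '{{range .RootFS.Layers}}{{println .}}{{end}}' myapp:2.4 | sort > new.layers

# Layers present only in the new image -- in principle, the only data
# that actually needs to enter the offline network
comm -13 old.layers new.layers
```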

Thanks, Meir

Meir Tseitlin
  • Avoid shipping images around; instead get the GitHub repository that contains the Dockerfile and do the `docker build` locally – Scott Stensland Jun 14 '17 at 19:34
  • @ScottStensland The reasons not to do that: either you won't have the same build and testing environment across the servers, or you can't push the responsibility of building onto the system admin. It would also hammer the `apt` mirrors and so on... – hurturk Jun 14 '17 at 20:07
  • Get yourself a $6/month Ubuntu VPS at ovh.com and perform all your Docker interactions from that remote server ... then huge Docker images are a snap ... it's a life saver and FAR better than suffering through `docker pull` and `docker push` commands from a laptop, no matter how fast your home ISP (no home ISP can beat a server's bandwidth) ... it also gives you a BEAST of a server for peanuts a month, and no need for a beefy laptop either ;) – Scott Stensland Jun 14 '17 at 21:16
  • @ScottStensland As mentioned, my target is OFFLINE networks, so building images on the target is not an option. In fact, one of the reasons I am using Docker is to deliver ready-made images without external dependencies... – Meir Tseitlin Jun 15 '17 at 07:02

1 Answer


You have several options:

  • Set up a private registry within the private network and let the servers pull from it (a sketch of this workflow follows below)
  • Utilize the torrent protocol to move images even faster; see the Docket project
  • Failing that, gzip your `docker save` output with the `--rsyncable` option and use rsync instead of scp
  • Use docker-slim to reduce your final image size

I hope a combination of a few of the methods above will be good enough.
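
For example, a rough sketch combining the first and third items; host names, image names, and paths are placeholders, and the rsync step assumes at least an intermittent link (otherwise the archive travels on physical media):

```
# One-time: run a plain registry container on a machine inside the private network
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# On the connected side: export the new image with an rsync-friendly gzip
docker save myapp:2.4 | gzip --rsyncable > myapp-2.4.tar.gz
# rsync re-sends only the blocks that changed since the previous archive
rsync -av myapp-2.4.tar.gz registry-host:/updates/

# On the registry host: load, tag, and push into the local registry
gunzip -c /updates/myapp-2.4.tar.gz | docker load
docker tag myapp:2.4 localhost:5000/myapp:2.4
docker push localhost:5000/myapp:2.4
# Other machines pull registry-host:5000/myapp:2.4 (a plain-HTTP registry may
# require the insecure-registries daemon setting on those machines)
```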

hurturk
  • Unfortunately (as mentioned), I am talking about OFFLINE networks, not private clouds, so all online tools (torrent/rsync) will not help. Setting up a private registry will not help either. I was not aware of docker-slim and will check it out, yet if there were an option to somehow upgrade an existing image with a new layer, it would be a far better one... – Meir Tseitlin Jun 15 '17 at 07:09
  • Hi Meir, I really meant an offline approach and a private network for all the items above. For the private registry, for example, you don't need an internet connection; you can set it up on a local machine, and the others can reach it by that machine's IP address within the same local network. I've updated the "cloud" keyword in my answer to prevent confusion. – hurturk Jun 15 '17 at 07:13
  • For the private registry (I actually do use one), I still need to bring full-blown images into the offline network and push them in. So if I run only a single container (or a small number of containers) per image, it is an unreasonable infrastructure effort that does not solve the problem. My goal is to provide a way to deliver constant, ongoing software updates on premise. – Meir Tseitlin Jun 15 '17 at 07:19
  • Oh, I understand; the last two items should work better then. – hurturk Jun 15 '17 at 07:23