
I'm new to Docker and docker-compose. I'm using a docker-compose file with several services. When I work with docker-compose I have containers and images on my local machine, and my task is to deliver them to a remote host. I found several solutions:

  1. I could build my images, push them to some registry, and pull them on the production server. But for this option I need a private registry, and as I see it a registry is an unnecessary element. I want to run containers directly.

  2. Save the Docker images to tar archives and load them on the remote host. I saw this post: Moving docker-compose containersets around between hosts, but in this case I need to write shell scripts. Or I can use docker directly (Docker image push over SSH (distributed)), but then I lose the benefits of docker-compose.

  3. Use docker-machine (https://github.com/docker/machine) with the generic driver. But in this case I can deploy from only one machine, or I need to configure certificates (How to set TLS Certificates for a machine in docker-machine). Again, it isn't a simple solution, as far as I can tell.

  4. Use docker-compose with the host parameter (-H), but with this option I need to build the images on the remote host (see the sketch after this list). Is it possible to build an image on the local machine and push it to the remote host?

  5. I could use docker-compose push (https://docs.docker.com/compose/reference/push/) to a registry on the remote host, but for this I need to create that registry and pass its hostname as a parameter to docker-compose every time.
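
To illustrate option 4, I mean something like the following (the host address is a placeholder):

    # Point docker-compose at a remote Docker daemon (placeholder address).
    # The build context is uploaded and the images are built remotely.
    docker-compose -H "tcp://remote.example.com:2375" up -d --build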

What is the best practice for delivering Docker containers to a remote host?

Volodya Lombrozo

1 Answer


Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
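
A minimal sketch of that workflow, assuming a registry account named yourname and a single service called web (both placeholders):

    # docker-compose.yml names the image so build/push/pull all agree on the tag:
    #   services:
    #     web:
    #       build: .
    #       image: yourname/web:1.0

    # On the build machine:
    docker-compose build   # builds yourname/web:1.0 locally
    docker-compose push    # pushes it to the registry

    # On the remote host (with the same docker-compose.yml):
    docker-compose pull    # pulls yourname/web:1.0 from the registry
    docker-compose up -d   # starts the containers without building anything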

If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
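
If you go that route, a sketch of the save/load cycle (image name, user, and host are placeholders):

    # On the build machine: export the image to a compressed tarball.
    docker save yourname/web:1.0 | gzip > web-1.0.tar.gz

    # Move it over whatever transport you have, e.g. scp over SSH.
    scp web-1.0.tar.gz user@remote.example.com:/tmp/

    # On the remote host: load the tarball back into Docker.
    # (docker load transparently handles gzip-compressed input.)
    docker load < /tmp/web-1.0.tar.gz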

There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
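
If you do need to run commands against a remote daemon, Docker 18.09+ supports an SSH transport, which avoids exposing the TCP socket at all; recent docker-compose versions honor it too (user and host are placeholders):

    # Reach the remote daemon over SSH instead of an exposed TCP socket.
    export DOCKER_HOST=ssh://user@remote.example.com
    docker ps              # lists containers on the remote host
    docker-compose up -d   # docker-compose also reads DOCKER_HOST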


Independently of the images, you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here; there is no way to transfer these within the pure Docker ecosystem.
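
For example (paths, user, and host are placeholders):

    # Copy the compose file and any bind-mounted configuration to the remote host.
    scp docker-compose.yml user@remote.example.com:/srv/myapp/
    rsync -av ./config/ user@remote.example.com:/srv/myapp/config/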

David Maze
  • Thank you. It sounds reasonable. Do you know the best place to use docker-machine? – Volodya Lombrozo Nov 02 '20 at 12:29
    I'd only use Docker Machine on an older non-Linux host that can't run the Docker Desktop products. I wouldn't use it to provision a cloud instance, and it has no relation to the image-distribution problem you're asking about here. – David Maze Nov 02 '20 at 14:33