
Several articles have been extremely helpful in understanding Docker's volume and data management. These two in particular are excellent:

However, I am not sure if what I am looking for is discussed. Here is my understanding:

  1. When running docker run -v /host/something:/container/something, the host files will overlay (but not overwrite) the container files at the specified location. The container no longer has access to the files previously at that location; it sees only the host files mounted there.
  2. When defining a VOLUME in a Dockerfile, other containers may share the contents created by the image/container.
  3. The host may also view/modify a Dockerfile volume, but only after discovering the true mountpoint using docker inspect (usually somewhere like /var/lib/docker/vfs/dir/cde167197ccc3e138a14f1a4f7c....). However, this becomes awkward when Docker has to run inside a VirtualBox VM.
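Points 1 and 3 above can be sketched as follows (the paths and container name are illustrative, not from the original question):

```shell
# Point 1: the bind mount shadows whatever the image had at that path;
# the container sees only the host directory's contents there.
docker run --rm -v /host/something:/container/something alpine ls /container/something

# Point 3: discover where a Dockerfile VOLUME actually lives on the host
docker inspect --format '{{ json .Mounts }}' some_container
```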

How can I reverse the overlay so that when mounting a volume, the container files take precedence over my host files?

I want to specify a mountpoint where I can easily access the container filesystem. I understand I can use a data container for this, or I can use docker inspect to find the mountpoint, but neither is a good fit in this case.

Rob Bednark
Jack Palkens
  • Your question is against your rule number 1 – Xiongbing Jin Mar 19 '16 at 22:03
  • 3
    @warmoverflow I apologize for the vague language. The numbered list is only to enumerate what I already know Docker to be capable of. They are not rules I want to comply with. It is included to let you know I have done due diligence and am looking for help. – Jack Palkens Mar 19 '16 at 22:39

2 Answers


The Docker 1.10+ way of sharing files is through a volume, as in docker volume create.
That means you can use a data volume directly (you no longer need a container dedicated to a data volume).

That way, you can share and mount that volume in a container, which will then keep its content in said volume.
That is more in line with how a container works, isolating memory, CPU, and filesystem from the host. It is also why you cannot "mount a volume and have the container's files take precedence over the host files": that would break the container's isolation and expose its content to the host.
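A minimal sketch of that workflow, assuming a volume named mydata and an image path /app/data (both illustrative):

```shell
# Create a named volume managed by Docker
docker volume create mydata

# Mount it into a container. When an empty named volume is mounted
# over a path that the image populates, Docker copies the image's
# files into the volume on first use, so the container's files
# are what you see initially.
docker run --rm -v mydata:/app/data alpine ls /app/data

# See where Docker keeps the volume on the host
docker volume inspect mydata
```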

VonC

Begin your container's script by copying files from a read-only bind mount (reflecting the host files) to a work location inside the container. End the script by copying the necessary results from the container's work location back to the host, using either the same or a different mount point.

Alternatively to the end-of-script copy, run the container without automatically removing it at the end, then run docker cp CONTAINER_NAME:CONTAINER_DIR HOST_DIR, then docker rm CONTAINER_NAME.
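The docker cp variant might look like this (the container name worker, image myimage, and paths are hypothetical):

```shell
# Run without --rm so the stopped container's filesystem survives
docker run --name worker myimage /work/run.sh

# Copy the results out of the stopped container, then clean up
docker cp worker:/work/results ./results
docker rm worker
```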

Alternatively to copying results back to the host, keep them in a separate "named" volume, provided that the container has it mounted (e.g. --mount type=volume,src=datavol,dst=CONTAINER_DIR/work). Use the named volume with other docker run commands to retrieve or use the results.
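A sketch of that named-volume approach (the volume name datavol and image myimage are assumptions for illustration):

```shell
docker volume create datavol

# First run writes its results into the named volume at /work
docker run --rm --mount type=volume,src=datavol,dst=/work myimage /work/run.sh

# A later run (even of a different image) can read the results back
docker run --rm --mount type=volume,src=datavol,dst=/work alpine ls /work
```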

The input files may be modified on the host during development between repeated runs of the container. Avoid shadowing them with frozen copies in the named volume; beginning the container script by copying the input files from the host can help.

Using a named volume also helps when running the container read-only. (One may still need --tmpfs /tmp for temporary files, or --tmpfs /tmp:exec if some container commands create and run executable code in that temporary location.)
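Putting those pieces together, a read-only run might look like this (image name and paths are assumptions):

```shell
# --read-only makes the container's root filesystem immutable;
# the named volume and tmpfs remain the only writable locations.
docker run --rm --read-only \
  --mount type=volume,src=datavol,dst=/work \
  --tmpfs /tmp:exec \
  myimage /work/run.sh
```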

eel ghEEz