
We're working to create a standard "data science" image in Docker in order to help our team maintain a consistent environment. In order for this to be useful for us, we need the containers to have read/write access to our company's network. How can I mount a network drive to a docker container?

Here's what I've tried using the rocker/rstudio image from Docker Hub:

This works:

docker run -d -p 8787:8787 -v //c/users/{insert user}:/home/rstudio/foobar rocker/rstudio

This does not work (where P is the mapped location of the network drive):

docker run -d -p 8787:8787 -v //p:/home/rstudio/foobar rocker/rstudio

This also does not work:

docker run -d -p 8787:8787 -v //10.1.11.###/projects:/home/rstudio/foobar rocker/rstudio

Any suggestions?

I'm relatively new to Docker, so please let me know if I'm not being totally clear.

KingOfTheNerds
  • Have you tried mounting the remote drive to a local directory? Then you could use the local dir in the docker run command. – Dave C May 16 '17 at 12:35
  • Thanks! Can you please help me understand how that is different from mapping the network volume onto a "local" drive? In the example above, the network drive is P:, but Docker won't recognize it. – KingOfTheNerds May 16 '17 at 15:17
  • No different. I just missed that part in your question. From what I can tell in further reading, Docker doesn't support mapped drives. – Dave C May 24 '17 at 14:41
  • Thanks - that's where I got too. Was hoping somebody figured out a solution. – KingOfTheNerds May 25 '17 at 21:37
  • Just a quick update for folks looking at this. As near as we've been able to figure out, this is actually not possible. As such, we've deployed our container via a Linux machine in order to solve it. – KingOfTheNerds Sep 12 '17 at 23:34
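For anyone landing here later: the Linux-machine workaround mentioned in the comments amounts to mounting the share on the Docker host first and then bind-mounting that directory into the container. A rough sketch, assuming a CIFS/SMB share, keeping the placeholders from the question, and requiring cifs-utils on the host:

# On the Linux Docker host: mount the SMB share to a local directory (credentials are placeholders)
sudo mkdir -p /mnt/projects
sudo mount -t cifs //10.1.11.###/projects /mnt/projects -o username=<user>,password=<password>

# Then bind-mount the host directory into the container, as in the working example above
docker run -d -p 8787:8787 -v /mnt/projects:/home/rstudio/foobar rocker/rstudio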

5 Answers

6

Here is my solution. I have a Synology NAS, and the shared folder uses the SMB protocol. I managed to connect it in the following way. The most important thing was to specify version 1.0 (vers=1.0); it didn't work without it. I spent two days trying to solve this.

version: "3"

services:
  redis:
    image: redis
    restart: always
    container_name: 'redis'
    command: redis-server
    ports:
      - '6379:6379'
    environment:
      TZ: "Europe/Moscow"

  celery:
    build:
      context: .
      dockerfile: celery.dockerfile
    container_name: 'celery'
    command: celery --broker redis://redis:6379 --result-backend redis://redis:6379 --app worker.celery_worker worker --loglevel info
    privileged: true
    environment:
      TZ: "Europe/Moscow"
    volumes:
      - .:/code
      - nas:/mnt/nas
    links:
      - redis
    depends_on:
      - redis

volumes:
  nas:
    driver: local
    driver_opts:
      type: cifs
      o: username=user,password=pass,vers=1.0
      device: "//192.168.10.10/main"
5

I know this is relatively old, but for the sake of others, here is what usually works for me. For us, the shares live on a Windows file server, so we use cifs-utils to map the drive. I assume the instructions below can be adapted to NFS or anything else as well.

First, you need to run the container in privileged mode so that you can mount remote folders inside the container (the --dns flag might not be required):
docker run --dns <company dns ip> -p 8000:80 --privileged -it <container name and tag>

Now (assuming CentOS with cifs and being root in the container), hop into the container and run:

install cifs-utils if not installed yet
yum -y install cifs-utils

create the local dir to be mapped
mkdir /mnt/my-mounted-folder

prepare a file with username and credentials
echo "username=<username-with-access-to-shared-drive>" > ~/.smbcredentials
echo "password=<password>" > ~/.smbcredentials

map the remote folder
mount <remote-shared-folder> <my-local-mounted-folder> -t cifs -o iocharset=utf8,credentials=/root/.smbcredentials,file_mode=0777,dir_mode=0777,uid=1000,gid=1000,cache=strict

Now you should have access.
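If you prefer to bake these steps into the image instead of running them by hand, a minimal sketch of an entrypoint-style script could look like the following (the script name and mount point are my own placeholders; the container still has to run with --privileged):

#!/bin/bash
# entrypoint.sh - hypothetical helper that mounts the share before starting the main process
yum -y install cifs-utils                  # install the CIFS client tools (skip if baked into the image)
mkdir -p /mnt/my-mounted-folder            # local mount point inside the container
chmod 600 /root/.smbcredentials            # keep the credentials file private
mount -t cifs <remote-shared-folder> /mnt/my-mounted-folder \
  -o iocharset=utf8,credentials=/root/.smbcredentials,file_mode=0777,dir_mode=0777,uid=1000,gid=1000,cache=strict
exec "$@"                                  # hand off to the container's main command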

Hope this helps.

lev haikin
4

I have been searching for a solution for the last few days, and I finally got one working.

I am running the Docker container on an Ubuntu virtual machine and mapping a folder from another host on the same network, which is running Windows 10. I'm fairly sure the operating system where the container runs is not a problem, because the mapping is done from the container itself, so this solution should work on any OS.

Let's code.

First you should create the volume:

docker volume create \
  --driver local \
  --opt type=cifs \
  --opt device=//<network-device-ip-folder> \
  --opt o=user=<your-user>,password=<your-pw> \
  <volume-name>

Then run a container from an image:

docker run \
  --name <desired-container-name> \
  -v <volume-name>:/<path-inside-container> \
  <image-name>

After this a container is running with the volume assigned to it, mounted at /<path-inside-container>. If you create a file in either of these folders it will automatically be replicated in the other.

In case someone wants to get this running from docker-compose, I leave this here:

services:
  <image-name>:
    build: 
      context: .
    container_name: <desired-container-name> 
    volumes:
       -  <volume-name>:/<path-inside-container>
    ...

volumes:
  <volume-name>:
    driver: local
    driver_opts: 
      type: cifs 
      device: //<network-device-ip-folder>
      o: "user=<your-user>,password=<your-pw>"

Hope this helps.

  • This doesn't work for Windows containers. The daemon throws an error due to OS incompatibility: `Error response from daemon: create test_cifs_volume: options are not supported on this platform` – jrbe228 Mar 27 '22 at 17:29

0

Adding to the solution by @Александр Рублев: the trick that solved this for me was reconfiguring the Synology NAS to accept the SMB version used by Docker. In my case I had to enable SMBv3.
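In practice that just means raising the vers option in the volume definition to match what the NAS now accepts. A rough sketch using the CLI, reusing the share and credential placeholders from that answer:

docker volume create --driver local \
  --opt type=cifs \
  --opt o=username=user,password=pass,vers=3.0 \
  --opt device=//192.168.10.10/main \
  nas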

0

I know this is old, but I found it when looking for something similar, and it is clearly still being found by others like myself. I have figured out how to get this to work for a similar situation, though it took me a while. The answers here are missing some key information that I'll include, possibly because it wasn't available at the time.

  1. The CIFS storage is, I believe, only for when you are connecting to a Windows system; I don't believe it is used by Linux at all unless that system is emulating a Windows environment (e.g. with Samba).
  2. This same thing can be done with NFS, which is less secure, but is supported by almost everything.

You can create an NFS volume in a similar way to the CIFS one, just with a few changes. I'll list both so they can be seen side by side.

When using NFS on WSL2 you first need to install the NFS client in the Linux host OS. I believe CIFS requires something similar, most likely the cifs-utils mentioned by @LevHaikin, but as I don't use it I'm not certain. In my case the host OS is Ubuntu, but you should be able to find your system's equivalent of the nfs-common (or cifs-utils) package.

sudo apt update
sudo apt install nfs-common

That's it. That installs the client so NFS works with Docker. (It took me forever to realize that this was the problem, since it doesn't seem to be mentioned as a requirement anywhere.)


If using NFS, you need to have set NFS permissions for the shared folder on the network device. In my case this is done at the parent folder, with the mount then pointing to a folder inside it, which is fine. (In my case the NAS that is my server mounts at #IP#/volume1/folder. Within the NAS I never see volume1 in the directory structure, but that full path to the shared folder is shown in the settings page when I set the NFS permissions. I'm not including the volume1 part here because your system will likely be different.) You want the FULL PATH after the IP (use the numeric IP, NOT the hostname), according to your NFS share, whatever it may be.
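On a plain Linux NFS server that permission lives in /etc/exports (a NAS usually exposes the same setting through its own UI). A rough sketch, with the exported path and allowed subnet as placeholders of my own:

# Hypothetical export entry on the NFS server - replace the path and the allowed network with your own
echo "/srv/nfs/projects 10.1.0.0/16(rw,sync,no_subtree_check)" | sudo tee -a /etc/exports
sudo exportfs -ra   # re-export everything so the new entry takes effect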

If using a CIFS device, the same is true, just with CIFS permissions.

These NFS mount options are used in the examples below:

  • The nolock option is often needed but may not be on your system. It just disables the ability to "lock" files.
  • The soft option means that if the system cannot connect to the mount directory it will not hang. If you need it to work only when the mount is there, you can change this to hard instead.
  • The rw (read/write) option mounts the share read/write; ro (read-only) would mount it read-only.

As I don't personally use the CIFS volume, the options set here are just the ones in the examples I found; whether they are necessary for you will need to be looked into.

  • The username & password are required and must be included for CIFS.
  • uid & gid are Linux user and group settings and should, I believe, be set to what your container needs; Windows doesn't use them to my knowledge.
  • file_mode=0777 & dir_mode=0777 are Linux permissions, essentially like chmod 0777, giving anything that can access the file read/write/execute permissions (more info in link #4 below); this should also be for the Docker container, not the CIFS host.
  • noexec has to do with execution permissions, but I don't think it actually functions here; it was included in most examples I found. nosuid limits the ability to access files tied to a specific user ID and shouldn't need to be removed unless you know you need it gone; as it's a protection, I'd recommend leaving it if possible. nosetuids means it won't set UID & GID for newly created files, and nodev means no access to or creation of devices on the mount point. vers=1.0 is, I think, a fallback for compatibility; I personally would not include it unless it doesn't work without it.

In these examples I'm mounting //NET.WORK.DRIVE.IP/folder/on/addr/device to a volume named "my-docker-volume" in Read/Write mode. The CIFS volume uses the user supercool with password noboDyCanGue55.

NFS from the CLI

docker volume create --driver local --opt type=nfs --opt o=addr=NET.WORK.DRIVE.IP,nolock,rw,soft --opt device=:/folder/on/addr/device my-docker-volume

CIFS from the CLI (may not work if Docker is installed on a system other than Windows; it will only connect to an IP on a Windows system)

docker volume create --driver local --opt type=cifs --opt o=user=supercool,password=noboDyCanGue55,rw --opt device=//NET.WORK.DRIVE.IP/folder/on/addr/device my-docker-volume
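Either volume can then be attached to a container by name. A quick usage example (the alpine image and /data path are just placeholders of mine):

docker run --rm -it -v my-docker-volume:/data alpine ls /data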

This can also be done within Docker Compose or Portainer. When you do it there, you will need to add a volumes: section at the bottom of the compose file, with no indent, at the same level as services:

In this example I am mounting the volumes

  • my-nfs-volume from //10.11.12.13/folder/on/NFS/device to "my-nfs-volume" in Read/Write mode & mounting that in the container to /nfs
  • my-cifs-volume from //10.11.12.14/folder/on/CIFS/device with permissions from user supercool with password noboDyCanGue55 to "my-cifs-volume" in Read/Write mode & mounting that in the container to /cifs

version: '3'
services:
  great-container:
    image: imso/awesome/youknow:latest
    container_name: totally_awesome
    environment:
      - PUID=1000
      - PGID=1000
    ports:
      - 1234:5432
    volumes:
      - my-nfs-volume:/nfs
      - my-cifs-volume:/cifs

volumes:
  my-nfs-volume:
    name: my-nfs-volume
    driver_opts:
      type: "nfs"
      o: "addr=10.11.12.13,nolock,rw,soft"
      device: ":/folder/on/NFS/device"
  my-cifs-volume:
    driver_opts:
      type: "cifs"
      o: "username=supercool,password=noboDyCanGue55,uid=1000,gid=1000,file_mode=0777,dir_mode=0777,noexec,nosuid,nosetuids,nodev,vers=1.0"
      device: "//10.11.12.14/folder/on/CIFS/device/"

More details can be found here:

  1. https://docs.docker.com/engine/reference/commandline/volume_create/
  2. https://www.thegeekdiary.com/common-nfs-mount-options-in-linux/
  3. https://web.mit.edu/rhel-doc/5/RHEL-5-manual/Deployment_Guide-en-US/s1-nfs-client-config-options.html
  4. https://www.maketecheasier.com/file-permissions-what-does-chmod-777-means/