36

I'm trying to use Docker beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6 s page load times. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.

spiilmusic
Viorel
  • Does this answer your question? [Docker mac symfony 3 very slow](https://stackoverflow.com/questions/38163447/docker-mac-symfony-3-very-slow) – Kwadz Jun 02 '20 at 15:14
  • You can now get performance almost as fast as with Linux, using Mutagen. See [this answer](https://stackoverflow.com/a/62155414/1941316). Hope that helps. – Kwadz Jun 02 '20 at 19:15

9 Answers

19

Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure this out. Once you know how, it's super easy and fixes all the slowdown issues!

The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication between the Docker containers and your Mac, instead of the standard OS X filesystem, which is currently very slow, either due to bugs or due to the way it works.

Follow these steps exactly.

1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type `cd ~`

Then type `git clone https://github.com/IFSight/d4m-nfs`

Alternatively, you can do this in a one-liner: `git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs`
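For reference, the whole step as a single snippet:

```sh
# clone d4m-nfs into your home directory
cd ~
git clone https://github.com/IFSight/d4m-nfs
```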

2.) Next, go into the d4m-nfs folder and create a new file in its etc folder (i.e., ~/d4m-nfs/etc, not the system /etc) titled d4m-nfs-mounts.txt

3.) Add the following line to it:

`/Users/yourusername:/Users/yourusername:0:0`

Mapping your home directory to itself like this lets you keep using relative paths with docker-compose; the trailing 0:0 is the uid:gid pair applied to the NFS export (0 = root), not ports.

EDIT: Do not put /Volumes here!!
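If you prefer the terminal, here's one way to create that file, assuming the repo was cloned to ~/d4m-nfs as in step 1 (substitute your actual username):

```sh
mkdir -p ~/d4m-nfs/etc
echo "/Users/yourusername:/Users/yourusername:0:0" > ~/d4m-nfs/etc/d4m-nfs-mounts.txt
```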

4.) Go to your Docker preferences and do the following:

(screenshot: Docker → Preferences… → File Sharing)

Make sure only /tmp is listed there and NOTHING ELSE. I mean nothing else; it won't work if anything else is present, since it will conflict with the NFS exports the script creates for you later. Restart Docker, and `docker-compose down` any running containers as well.

5.) Finally, navigate to the d4m-nfs directory from step 1 and run the script: `/bin/bash d4m-nfs.sh`

EDIT: As another user from GitHub (if-kenn) pointed out, the correct way to run it is `./d4m-nfs.sh`, which uses the script's shebang to pick the shell that should run it.

If done correctly, there should be no errors and this should just work. Please note: DO NOT run it as `sh d4m-nfs.sh`; this will create errors, and you will have to clear your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
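In other words, the safe invocation looks like this:

```sh
cd ~/d4m-nfs
./d4m-nfs.sh      # correct: the shebang selects bash
# NOT `sh d4m-nfs.sh` -- running it under sh corrupts /etc/exports
```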

This is what mine looks like.

(screenshot of the resulting /etc/exports file)

EDIT: IMPORTANT -- Remove the /private and /Volumes entries! The file should only contain the /Users/username line now!

If you see anything other than this, you were not running it with bash. If you make a mistake, you can quickly get to the exports file on a Mac and just clear it out to start over.

In Finder, select Go → Go to Folder…, then type `/etc/exports`.

This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
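From the terminal, inspecting and clearing the file looks like this (truncating it needs root, hence the sudo):

```sh
cat /etc/exports              # inspect the current NFS exports
sudo sh -c '> /etc/exports'   # truncate the file to start over
```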

Also make sure no containers are running, or you will get the "........" loop of death. If the loop keeps going, upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it working on my friend's computer. Refer to https://github.com/IFSight/d4m-nfs/issues/3

Another note on the "...." loop: I recently found one more fix. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be under /Users/username.

Also, make sure the /tmp folder has full write permissions, since the script needs to write there, or this won't work either: `sudo chmod -R 777 /tmp`
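Two commands that ship with macOS can help verify the result (a quick check, not part of the original steps):

```sh
sudo nfsd checkexports   # validate the syntax of /etc/exports
showmount -e localhost   # list what the local NFS server is actually exporting
```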

6.) If you did it right, running the script will look like this:

(screenshot of the script's successful output)

Then simply run `docker-compose up -d` as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except with NO MORE slowdowns!

You will need to run this script any time you restart your computer or Docker.

Also note: if you get mounting errors, you probably don't have your project stored under your /Users/username directory. Remember, that is what we mounted. If your project lives somewhere else, you will need to modify the d4m-nfs-mounts.txt file accordingly.
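For illustration, a minimal docker-compose.yml that plays nicely with this setup, assuming the project lives under /Users/yourusername (the service name and image are placeholders, not part of the original answer):

```yaml
version: "2"
services:
  app:
    image: php:7-apache    # placeholder image
    ports:
      - "8080:80"
    volumes:
      # a relative path resolves under /Users/yourusername/...,
      # which is covered by the NFS export configured above
      - .:/var/www/html
```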


Joseph Astrahan
  • Just a quick question: is this solution faster than the one here? https://forums.docker.com/t/how-to-speed-up-shared-folders/9322/15 I thought I might as well use the better one :) Any pointers would be helpful – AshwinKumarS May 19 '17 at 21:14
  • Yes, it is, because that one is basically using the shared-folder approach – Joseph Astrahan May 19 '17 at 21:46
  • Hi Joseph. So... it's almost 2022. Is this still the best solution for d4m volumes? – spiilmusic Oct 19 '21 at 20:39
  • I'm not sure, since I haven't had this issue in a long time. Are you still experiencing the slowdowns? – Joseph Astrahan Nov 28 '21 at 21:35
  • Yep. I'm totally disappointed with Docker for Mac, so I'm now working on Ubuntu under VirtualBox without mounting volumes from the Mac host. – spiilmusic Dec 15 '21 at 13:10
8

For people reading this now, it may be better to wait for Docker to fix this issue. A pull request to improve performance has already been accepted (https://github.com/docker/docker/pull/31047). It will be released sometime in April 2017 and should be a big improvement.

I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good overview of alternatives to OSXFS can be found at https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.

EDIT: The first improvement has landed in the edge release; https://github.com/docker/for-mac/issues/77 has more info on this.

Frenus
5

There's a long thread with explanations from the Docker team and various workarounds.

Currently, the issue is being tracked on GitHub.

While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.

Vanuan
4

I spent a lot of time searching for a viable solution, and I found one: d4m-nfs lets you use Docker volumes via NFS. In my case it improved performance 16-fold (1.8 s vs ~30 s)!

Also, d4m-nfs has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619

I'll just leave this here for other Googlers.

spiilmusic
  • This one looks promising. I wonder if there's a TL;DR saying whether it's possible to achieve a simple `docker run --rm -v` one-liner, e.g. for `mvn install`/`grunt build` purposes – ciekawy Feb 23 '17 at 16:17
  • I get an error trying to follow the instructions; can you explain step by step what you did to solve these issues? – Joseph Astrahan Mar 08 '17 at 15:11
  • My error was the following: ERROR: for dbdev Cannot start service dbdev: Mounts denied: /distribution/db_data_dev is not shared from OS X and is not known to Docker. You can configure shared paths from Docker -> Preferences... -> File Sharing. See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info. ERROR: Encountered errors while bringing up the project. – Joseph Astrahan Mar 08 '17 at 15:11
  • OK, I figured out the issue :) Refer to this issue: https://github.com/IFSight/d4m-nfs/issues/38 – Joseph Astrahan Mar 08 '17 at 18:23
  • I elaborated on your answer in my own answer to help other users. – Joseph Astrahan Mar 08 '17 at 19:12
2

Normally, volumes should be fast, but there isn't much you can change to make them faster unless you're willing to change the format of your disk.

Maybe the bottleneck is the CPU or RAM, though. You can check that with the command `docker stats`. By default the VM is set to 2 cores and 2 GB of RAM; you can change this in the Docker for Mac GUI.
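For a one-shot snapshot instead of the live-updating view:

```sh
docker stats --no-stream   # print one snapshot of per-container CPU/memory usage
```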

Julian
  • Well, it's not even close to native speed. Running a simple benchmark, mounted volumes are about 15 times slower than native. I don't think CPU or RAM are the bottleneck, since this only happens when I use mounted volumes; if I put the entire code base into the container I get bare-metal speed. CPU and RAM use is much, much lower than with a VM. – Viorel Jul 03 '16 at 09:48
  • Yes, but that native speed is on Linux, with the same format and no hypervisor. Docker for Mac volumes are still faster than VMs – Julian Jul 03 '16 at 09:54
  • Faster than a VM with NFS? With NFS-mounted volumes, page load time for a vanilla Symfony app in a VM is about 60 ms instead of 6 s with Docker beta. – Viorel Jul 03 '16 at 09:56
  • I don't know the differences between NFS and HFS+, but NFS works in a completely different way, so it could be faster (and you could use the host network driver) – Julian Jul 03 '16 at 10:07
  • OK, but the way it works right now, it's impossible to use for any serious development unless there is a workaround. – Viorel Jul 03 '16 at 10:29
2

I had exactly the same problem. For me, using docker-bg-sync (see it on GitHub) made a dramatic improvement in speed and CPU usage.

It's not as nice as just mounting the volume, since you have to start a new container for every sync, but it does the job.
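A rough sketch of the pattern, loosely based on the docker-bg-sync README; treat the image name, environment variable, and paths as assumptions and check the project's documentation for the exact configuration:

```yaml
version: "2"
services:
  web:
    image: php:7-apache      # placeholder for your app container
    volumes_from:
      - bg-sync              # consume the fast synced copy, not the slow osxfs mount
  bg-sync:
    image: cweagans/bg-sync  # assumed image name -- verify against the repo
    volumes:
      - .:/source            # slow osxfs mount, read only by the sync process
      - /destination         # fast container-local volume your app actually uses
    environment:
      - SYNC_DESTINATION=/destination   # assumed variable name
    privileged: true         # the sync process needs elevated privileges
```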

Wouter de Winter
2

In the latest Docker, 17.06.0-ce-mac18, volumes mounted with `:cached` seem to perform quite decently.
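For anyone unsure where the flag goes, a minimal sketch (service and image names are placeholders); the same suffix also works on the CLI, e.g. `docker run -v $(pwd):/app:cached ...`:

```yaml
version: "3"
services:
  app:
    image: php:7-apache          # placeholder image
    volumes:
      # :cached relaxes consistency: the host is authoritative and the
      # container's view may lag slightly, which speeds up reads a lot
      - .:/var/www/html:cached
```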

Viorel
  • I don't think so. I'm using Docker 17.06.0-ce-mac18 (18433) for a Ruby on Rails application, but its speed still sucks. – Radix Jul 06 '17 at 07:40
  • But did you mount the volumes with `:cached`? You have to change the docker-compose.yml file or pass it on the CLI. I tested with vanilla Symfony 3 and page load is about 200 ms, which is much better than 6 s :) – Viorel Jul 06 '17 at 13:03
  • I've `cached` volumes via `volumes: - .:/project_name:cached`, but it doesn't seem to work; the speed is still slow and a web page takes a long time to load. – Radix Jul 06 '17 at 14:11
  • Tested with a very large project and it takes 4-6 s to load a page. It used to be 60 s+ before, so it's quite decent in my opinion. :) – Viorel Jul 10 '17 at 04:21
  • Ohhh... not in my case... I've probably done something wrong; I'll check the docs. For now I'm using docker-sync and it's quite good. – Radix Jul 10 '17 at 04:30
  • In my case `:cached` sped up my server startup about 3x – Oleg Khalidov Jan 26 '19 at 03:08
1

I've found that creating a CoreOS VM under Parallels and then using the Docker inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
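The answer doesn't say how the Mac-side client reaches that VM; one common way, shown here as an assumption rather than the author's actual setup, is to point the local docker CLI at the VM's daemon via DOCKER_HOST (the IP and port are illustrative):

```sh
# assumes the CoreOS VM exposes its Docker daemon on tcp://10.211.55.5:2375
export DOCKER_HOST=tcp://10.211.55.5:2375
docker info    # the local CLI now talks to the daemon inside the VM
```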

I'm doing Linux code builds using CMake/Ninja/GCC, and it's almost twice as fast as the exact same build in Docker for Mac.

In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.

This seems to be a recent development; Docker for Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...

1

We overcame this issue by synchronizing the local and Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps: https://github.com/okteto/cnd

  • Does it handle deletes, or do you have to do a clean sync? That's the main issue with any sync strategy. – Viorel Oct 26 '18 at 12:04
  • It can handle everything that Syncthing supports: https://docs.syncthing.net/users/config.html In particular, it has the option `ignoreDelete`. – Pablo Chico de Guzman Oct 27 '18 at 19:29
  • Yes, it deletes when you actually delete a file, but what about the scenario where you `git checkout` a different branch that does not contain the same files as the current branch? As far as I remember, Syncthing was leaving those files behind, which made it impossible for me to use. Not sure if that changed in the last year, but I doubt it, because that's the problem with all the sync strategies I've tested so far. – Viorel Oct 29 '18 at 05:28
  • I have just tested your scenario using https://github.com/okteto/cnd and doing a `git checkout` deletes files on the remote server. In any case, in the context of `cnd` the remote server is a container, so you could automate recreating the container after every checkout. – Pablo Chico de Guzman Oct 30 '18 at 06:23
  • Yeah, but I'm planning to use it in development to speed up slow volumes from OS X. :) I'll check it out. – Viorel Oct 30 '18 at 11:21