46

I've updated Windows 10 to the latest 2004 version, installed WSL 2 and updated it, and installed Docker and Ubuntu.

When I create a simple index.php file with "Hello World", it works perfectly (response: 100-400 ms), but when I add my Laravel project, performance becomes miserable: it hangs for about 7 seconds before performing the request, and the response takes 4-7 seconds. Meanwhile, phpMyAdmin runs very smoothly (response: 1-2 seconds).

My docker-compose.yml file:

version: '3.8'
networks:
  laravel:

services:
  nginx:
    image: nginx:stable-alpine
    container_name: nginx
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    depends_on:
      - php
      - mysql
      - phpmyadmin
    networks:
      - laravel

  mysql:
    image: mysql:latest
    container_name: mysql
    restart: unless-stopped
    tty: true
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: secret
      SERVICE_TAGS: dev
      SERVICE_NAME: mysql
    networks:
      - laravel

  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    restart: always
    depends_on:
      - mysql
    ports:
      - 8081:80
    environment:
      PMA_HOST: mysql
      PMA_ARBITRARY: 1

  php:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: php
    volumes:
      - ./src:/var/www/html
    ports:
      - "9000:9000"
    networks:
      - laravel

  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - php
    networks:
      - laravel

  npm:
    image: node:latest
    container_name: npm
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    entrypoint: ['npm']

  artisan:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: artisan
    volumes:
      - ./src:/var/www/html
    depends_on:
      - mysql
    working_dir: /var/www/html
    entrypoint: ['php', '/var/www/html/artisan']
    networks:
      - laravel

I've been trying to fix this issue for two days but couldn't find an answer.

Thanks

Wail Hayaly
  • Did this not happen when you weren't using WSL 2? You could try disabling WSL 2 integration in Docker Desktop and check if the issue still persists. That would at least show whether the problem lies with WSL 2 or not. – octagon_octopus Jul 22 '20 at 17:28
  • I believe it was slightly better with Hyper-V (before installing WSL 2), but phpMyAdmin works perfectly, so maybe the problem is within the Laravel files, as Docker asks me to share these files every time I build with docker-compose. – Wail Hayaly Jul 23 '20 at 02:34
  • Ahh, I think I see what could be going wrong. Are you mounting your files from the Windows file system? It could result in very poor I/O. – octagon_octopus Jul 23 '20 at 10:30
  • Same issue here. Have you found an answer? – user3529607 Oct 24 '20 at 16:09
  • Same issue here. Laravel takes 14 s to load (mounted from Windows) vs 500 ms (copied to the container). In this case we reinvented the wheel: I mounted a network drive as a file system 15 years ago and had the same performance... – Yszty Jul 03 '21 at 14:46
  • Did you fix it? How? Please share it. – Luis Cruz Jul 09 '21 at 20:05

9 Answers

36

It looks like you are mounting your Laravel project into your container. This can result in very poor file I/O if you are mounting these files from your Windows environment into WSL 2, since WSL 2 currently has a lot of problems accessing files that live on the Windows side. This I/O issue exists as of July 2020; you can follow the ongoing status of the issue on GitHub.

There are three possible solutions I can think of that will resolve this issue for now.


Disable the WSL 2 based engine for Docker until the issue is resolved

Since this issue only occurs when WSL 2 tries to access the Windows filesystem, you could choose to disable WSL 2 Docker integration and run your containers on your Windows environment instead. You can find the option to disable it in the UI of Docker Desktop (screenshot omitted).


Store your project in the Linux filesystem of WSL 2

Again, since this issue occurs when WSL 2 tries to access the mount points of the Windows filesystem under /mnt, you could choose to store your project onto the Linux filesystem of WSL 2 instead.
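For example, from inside your WSL 2 distro you could copy the project out of the Windows mount into your Linux home directory and run Compose from there. This is only a rough sketch; the paths and user name are placeholders, not taken from the question:

# run inside the WSL 2 distro (e.g. Ubuntu); adjust the source path to your project
cp -r /mnt/c/Users/<you>/projects/laravel-app ~/laravel-app
cd ~/laravel-app
docker-compose up -d --build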


Build your own Dockerfiles

You could choose to create your own Dockerfiles and, instead of mounting your project, COPY the desired directories into the Docker images. This results in slower builds, since WSL 2 still has to read your Windows filesystem to build the images, but runtime performance will be much better, since the container won't have to fetch these files from the Windows environment every time.
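As a minimal sketch of that approach (the base image and paths here are illustrative, assuming the ./src layout from the question, not the asker's actual Dockerfile):

FROM php:7.4-fpm-alpine
WORKDIR /var/www/html
# bake the application code into the image instead of bind-mounting it at runtime
COPY src/ /var/www/html/

With this, you would drop the ./src:/var/www/html volume from the php service in docker-compose.yml and rebuild the image whenever the code changes.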

octagon_octopus
  • Tried to copy the project files to `mnt/wsl` but it failed (**can't read from the source file or disk**); the same thing happens when I copy them into the image containers. I ended up disabling WSL 2 and removing Ubuntu. :( – Wail Hayaly Jul 24 '20 at 04:30
  • Your WSL filesystem isn't under `mnt/WSL`. It's that entire filesystem itself. – octagon_octopus Jul 24 '20 at 09:19
  • Could you please elaborate on "Store your project in WSL 2" How can I do it? – user3529607 Oct 24 '20 at 16:08
  • @user3529607 Within WSL you have the Linux filesystem (e.g. directory `\\wsl$\Ubuntu-18.04`) and the Windows filesystem as a mount point (e.g. `\\wsl$\Ubuntu-18.04\mnt\c`). In a nutshell, place your project files somewhere other than a subdirectory of `/mnt/`, otherwise there is a good chance you will place your project files within the Windows filesystem, which is what causes the bad I/O. – octagon_octopus Oct 25 '20 at 23:39
  • How is this disabled on Windows 10 Home? – Dmitry Leiko Feb 18 '21 at 09:35
  • Can someone explain all the above steps in simpler words? – Gul Muhammad Mar 28 '21 at 09:23
  • Points 2 and 3 are fine for _running_ applications but what about _developing_ applications on Windows? Do we have to edit, save, and copy the file into the container each time? Sticking with option 1 until they have this resolved. – waterloomatt Apr 18 '21 at 00:11
  • Disabling the WSL 2 based engine works for me, thanks! – Bechilled Apr 20 '21 at 21:39
  • @waterloomatt I'm not following completely. Point 2 and 3 shouldn't require you to edit, save and copy files into a container each time. Point 2 is just as simple as moving your project to a different directory and continuing to develop from there, although you will probably need an editor that supports developing in WSL to continue working (e.g. VSCode). Point 3 uses the COPY instruction in your Dockerfile, so you aren't copying files manually. – octagon_octopus Apr 21 '21 at 07:48
  • Disabling WSL 2 works perfectly! Thank you very much! – Andrii Sukhoi Nov 08 '21 at 16:03
  • I tried point 2 and moved the files, but after updating my docker-compose file with the new path, the files no longer seem to be visible to the Windows Docker installation; I get an empty folder and cannot run the project anymore. Any hint? – Claudio Ferraro Dec 27 '21 at 15:17
  • Disabling the WSL 2 Docker engine and enabling Hyper-V in the Windows 10 Pro edition worked for me, thanks. :-) – Aarony Jan 06 '22 at 09:07
  • Hm... so what if the WSL 2 option is "always on", i.e. greyed out and ticked? What is preventing switching off the option? – Steve Horvath Nov 04 '22 at 03:26
  • If the WSL 2 option is "always on", that means you don't have Hyper-V available; you're probably running the Home edition of Windows. (If Hyper-V is not turned on in Windows Features, it will give you an error instead when you turn that option off.) – Diogo Gomes Mar 09 '23 at 15:01
8

Just move your whole project source to the folder

\\wsl$\Ubuntu-20.04\home\<User Name>\<Project Name>

The speed will then be very fast, as if running natively on Linux.

(Before and after screenshots showing the load-time difference omitted.)

Thinh Nguyen
6

You are running your project from an /mnt/xxx folder, aren't you?

This is because WSL 2 filesystem performance under /mnt is much slower than WSL 1's.
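A rough way to see the difference yourself is to time a file-heavy operation on both sides (the paths below are only examples):

# same project copied to both locations; vendor/ typically holds thousands of small files
cd /mnt/c/projects/my-app && time find vendor -type f | wc -l
cd ~/my-app && time find vendor -type f | wc -l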

If you want a very short solution, here it is. It works on Ubuntu 18.04 and Debian from the Microsoft Store:

  1. Go to the docker settings and turn on Expose daemon on tcp://localhost:2375 without TLS and turn off Use the WSL 2 based engine.
  2. Run this command:
clear && sudo apt-get update && \
sudo curl -fsSL https://get.docker.com -o get-docker.sh && sudo sh get-docker.sh && sudo usermod -aG docker $USER && \
sudo curl -L "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose && sudo chmod +x /usr/local/bin/docker-compose && \
echo "export PATH=\"$PATH:$HOME/.local/bin\"" >> ~/.profile && source ~/.profile && \
echo "export DOCKER_HOST=tcp://localhost:2375" >> ~/.bashrc && source ~/.bashrc && \
printf '[automount]\nroot = /\noptions = metadata' | sudo tee -a /etc/wsl.conf
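After opening a new shell, a quick sanity check (assuming Docker Desktop is running on Windows with the daemon exposed on tcp://localhost:2375) would be:

# the client inside WSL 1 should now talk to the Windows daemon
docker info
docker-compose version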

I wrote the instruction on how to integrate the Docker Desktop with WSL 1 here: https://github.com/CaliforniaMountainSnake/wsl-1-docker-integration

James Bond
  • Had to downgrade from WSL 2 to WSL 1 with Hyper-V and spent hours trying various fixes. These are the only instructions that worked. Life saving! – M P Nov 19 '21 at 18:02
5

OK, so I found an interesting fact :))

I'm running Docker on Windows without WSL 2.

A request has a TTFB of 5.41 s. Below is the index.php file. I used die() to check where the time grows, and I found that if I put die() after terminate(), the TTFB becomes ~2.5 s.

<?php
/**
 * Laravel - A PHP Framework For Web Artisans
 *
 * @package  Laravel
 * @author   Taylor Otwell <taylor@laravel.com>
 */

define('LARAVEL_START', microtime(true));

require __DIR__.'/../../application/vendor/autoload.php';

$app = require_once __DIR__.'/../../application/bootstrap/app.php';

#die(); <-- TTFB 1.72s
$kernel = $app->make(Illuminate\Contracts\Http\Kernel::class);

$response = $kernel->handle(
    $request = Illuminate\Http\Request::capture()
);

$response->send();

#die(); <-- TTFB 2.67s

$kernel->terminate($request, $response);

#die(); <-- TTFB 2.74s

#if there is no die in the file then TTFB is ~6s
radu
  • I found the same thing in our project. I have no idea why `die()` does anything but it took our nginx + php stack from 6s per response to 200ms per response. – wasabi Oct 01 '20 at 13:39
4

I was facing the same issue with Laravel/Docker/Nginx on Windows 11.

I couldn't disable "Use the WSL 2 based engine", because it was greyed out, even after installing Hyper-V on Windows 11 Home (tweak).

Here is the best solution I found:

1. Copy your project into your WSL folder

  • Open Windows Explorer and type the following address: \\wsl.localhost\
  • Select your WSL instance, and then you can copy your project into /home/yourUsername/

The full path will be something like: \\wsl.localhost\Ubuntu\home\username\yourProject

2. Start the Docker containers

Just open a terminal in this folder and start your containers,

e.g.: docker-compose up -d

3. Visual Studio Code

To open the project folder from Visual Studio Code: CTRL + P, then >Remote-WSL: Open Folder in WSL...

To open the project folder from the command line: code --remote wsl+Ubuntu /home/username/yourProject

Sqdz
  • This has improved the performance greatly. But I also have to limit the memory that Docker uses, otherwise it eats all the RAM and the system goes slow. – mouchin777 Feb 04 '22 at 09:29
2

You can exclude the vendor folder from the bind mount in your compose file, like this:

volumes:
  - ./www:/var/www
  - vendor:/var/www/vendor
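A slightly fuller sketch of this pattern (the php service name is an assumption; the volume paths come from the snippet above): the named volume also has to be declared at the top level, and vendor/ inside the container then has to be populated by running Composer in the container rather than on the host.

services:
  php:
    build: .
    volumes:
      - ./www:/var/www            # code still bind-mounted from the host
      - vendor:/var/www/vendor    # vendor lives in a Docker-managed volume instead

volumes:
  vendor:

After the containers are up, something like docker-compose exec php composer install (assuming Composer is available in that image) fills the volume from inside the container.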
  • I excluded vendor from the mounted volumes and it started running fast again for me. It's a bit of a pain that I have a different vendor on the host and inside the Docker container, but it's a good enough workaround IMHO. – Matěj Račinský Dec 21 '21 at 15:24
1

This is a really sketchy way of improving the speed, but here it goes:

Problem

The speed of loading Composer dependencies from a vendor dir that is mapped from the project root on Windows into a Docker container via WSL 2 is currently really slow.

Solution

Copy the vendor dir into the Docker image and use it instead of the mounted one in the project root.

The project structure, using a MySQL database and Apache with PHP 7.4 and Composer autoloading, looks like this:

db
    - init.sql
dev
    - db
        - Dockerfile
        - data.sql
    - www
        - Dockerfile
        - vendor-override.php
    - docker-compose.yaml
src
    - ...
vendor
    - ...
composer.json
index.php
...

The idea here is to keep dev stuff separated from the main root dir.

dev/docker-compose.yaml

version: '3.8'
services:
  test-db:
    build:
      context: ../
      dockerfile: dev/db/Dockerfile

  test-www:
    build:
      context: ../
      dockerfile: dev/www/Dockerfile
    ports:
      - {insert_random_port_here}:80
    volumes:
      - ../:/var/www/html

Here we have two services, one for MySQL database and one for Apache with PHP, which maps web root /var/www/html to our project root. This enables Apache to see the project source files (src and index.php).

dev/db/Dockerfile

FROM mysql:5.7.24

# Add the initialization script (prefixing it with 0 makes sure it is executed first, as the scripts are loaded alphabetically)
ADD db/init.sql /docker-entrypoint-initdb.d/0init.sql

# Add test data (prefixing it with 99 makes sure it is executed last)
ADD dev/db/data.sql /docker-entrypoint-initdb.d/99data.sql

dev/www/Dockerfile

FROM php:7.4.0-apache-buster

# Install PHP extensions and dependencies required by them
RUN apt-get update -y && \
    apt-get install -y libzip-dev libpng-dev libssl-dev libxml2-dev libcurl4-openssl-dev && \
    docker-php-ext-install gd json pdo pdo_mysql mysqli ftp simplexml curl

# Enable apache mods and .htaccess files
RUN a2enmod rewrite && \
    sed -e '/<Directory \/var\/www\/>/,/<\/Directory>/s/AllowOverride None/AllowOverride All/' -i /etc/apache2/apache2.conf

# Add the Composer vendor dir to improve loading speed, since it's located inside Linux
ADD vendor /var/www/vendor
ADD dev/www/vendor-override.php /var/www/
RUN chmod -R 777 /var/www && \
    mkdir /var/www/html/src && \
    ln -s /var/www/html/src /var/www/src

# Expose html dir for easier deployments
VOLUME /var/www/html

I'm using official Apache buster image with PHP 7.4.

  1. Here we copy the vendor dir and vendor-override.php to a dir above the webroot (/var/www) so it doesn't interfere with the project root.
ADD vendor /var/www/vendor
ADD dev/www/vendor-override.php /var/www/
  2. Next we set read/write/execute permissions for everybody so Apache can read it. This is necessary because it's outside the webroot.
chmod -R 777 /var/www
  3. Now the trick is to make sure Composer autoloads classes from the src dir. This is solved by creating a link from /var/www/src to /var/www/html/src, which is in our project root.
mkdir /var/www/html/src
ln -s /var/www/html/src /var/www/src

dev/www/vendor-override.php

<?php
# Override the default Composer dependencies to be loaded from inside Docker. This is used because
# loading files over mapped volumes is really slow on WSL 2.
require_once "/var/www/vendor/autoload.php";

It simply uses the vendor dir baked into the Docker image.

index.php

<?php

$fixFile = "../vendor-override.php";

if (file_exists($fixFile))
    require_once $fixFile;
else
    require_once "vendor/autoload.php";
    
...

If the vendor-override.php file is detected, it is used instead of the one from the project root. This makes sure index.php loads the vendor dir inside the Docker image, which is way faster.

composer.json

{
  "autoload": {
    "psr-4": {
      "Namespace\\": ["src"]
    }
  },
  ...
}

A simple autoloading setup that maps "Namespace" to the src dir in the project root.

Key things to note

  • index.php loads vendor-override.php instead of vendor from the project root
  • PSR-4 autoloading is solved by linking

Downside

The downside is that you have to rebuild the Docker image every time you update dependencies.
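For example, after updating dependencies on the host you would rebuild and restart the web image (the test-www service name comes from the compose file above; the exact invocation depends on where you run it):

# refresh vendor/ on the host, then rebuild the image that bakes it in
composer update
docker-compose -f dev/docker-compose.yaml build test-www
docker-compose -f dev/docker-compose.yaml up -d test-www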

scetix
  • 20
1

By default, OPcache is not enabled in the php:8.0-apache Docker container. Add this to your Dockerfile:

RUN docker-php-ext-install opcache
COPY ./opcache.ini /usr/local/etc/php/conf.d/opcache.ini

Create file opcache.ini:

[opcache]
opcache.enable=1
; 0 means it will check on every request
; 0 is irrelevant if opcache.validate_timestamps=0 which is desirable in production
opcache.revalidate_freq=0
opcache.validate_timestamps=1
opcache.max_accelerated_files=10000
opcache.memory_consumption=192
opcache.max_wasted_percentage=10
opcache.interned_strings_buffer=16
opcache.fast_shutdown=1
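To confirm the extension was installed and the ini file was picked up, a quick check from the host could look like this (the container name is an assumption):

# list loaded PHP modules and the effective OPcache settings inside the container
docker exec my-php-container php -m | grep -i opcache
docker exec my-php-container php -i | grep "opcache.enable"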
Krol
1

I just enabled Hyper-V from PowerShell:

DISM /Online /Enable-Feature /All /FeatureName:Microsoft-Hyper-V

Restarted, and now it works reasonably well. I have not changed anything else whatsoever.

See more info about how to enable Hyper-V here.

temo