Hi there! It's Jorge. In this post, I'm going to reveal a simple hack you can make on your docker-compose file if you're developing a Laravel project on a Windows machine with Docker, and experiencing really slow disk performance.
This was the docker-compose.yml file used in the development:
```yaml
version: "2.3"
services:
  app:
    build: ./app
    container_name: app
    environment:
      PHP_OPCACHE_ENABLE: 0
      PRODUCTION: 0
    ports:
      - "8080:80"
    depends_on:
      - "database"
      - "redis"
    volumes:
      - ./app:/app
```
Notice the volume `./app:/app`. This bind mount maps our local `app` directory to the container's `/app` directory, so any code change on the host machine is immediately reflected inside the container, letting us run and test our changes right away.
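To see the bind mount in action, you can change a file on the host and read it back from inside the container. This is a quick sketch assuming the service name `app` from the compose file above (the test filename is hypothetical):

```shell
# On the host: create or change a file inside the bound directory
echo "<?php // touched on the host" > app/bindmount-test.php

# Inside the container: the change is visible immediately via the bind mount
docker-compose exec app cat /app/bindmount-test.php
```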
The vendor directory - the root of all evil
While trying to figure out why the hell this app was so slow in the local development environment versus the deployed production environment, I started to monitor my machine's resources at each step of the request. I noticed that serving static resources with nginx was always fast, but when the PHP backend kicked in, my SSD usage skyrocketed.
I started to dig into the Laravel PHP backend and noticed that even running a simple command like `composer dump-autoload` was extremely slow, so my focus was now on optimizing this step alone. Again, I noticed that my container was hammering my SSD just for the autoload generation.
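A simple way to quantify this step is to time the command inside the running container, both before and after any change (a sketch assuming the `app` service name from the compose file; actual timings will vary with your setup):

```shell
# Time autoload generation inside the app container
docker-compose exec app sh -c "cd /app && time composer dump-autoload"
```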
While researching this problem, I bumped into a post about how to improve performance in VS Code Dev Containers (which I was already trying as well). This explanation from Microsoft said it all:
> Since macOS and Windows run containers in a VM, "bind" mounts are not as fast as using the container's filesystem directly. Fortunately, Docker has the concept of a local "named volume" that can act like the container's filesystem but survives container rebuilds. This makes it ideal for storing package folders like node_modules, data folders, or output folders like build where write performance is critical.
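The same pattern applies to any write-heavy dependency folder, not just PHP's vendor. As a hedged sketch for a Node project (service and volume names here are hypothetical), a named volume can be overlaid on `node_modules` inside a broader bind mount:

```yaml
services:
  web:
    build: .
    volumes:
      # Bind mount: source code stays editable from the host
      - .:/workspace
      # Named volume: container-native filesystem for fast dependency writes
      - node-modules:/workspace/node_modules

volumes:
  node-modules:
```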
I then changed my docker-compose.yml file to include a named volume specifically for the `/vendor` directory, while keeping the original bind mount as well. After running `docker-compose up -d` again, `composer dump-autoload` got really fast! Problem solved.
Here is what my final docker-compose.yml looked like:
```yaml
version: "2.3"
services:
  app:
    build: ./app
    container_name: app
    environment:
      PHP_OPCACHE_ENABLE: 0
      PRODUCTION: 0
    ports:
      - "8080:80"
    depends_on:
      - "database"
      - "redis"
    volumes:
      - ./app:/app
      # Add this vendor named volume for a disk read/write performance boost
      - vendor-dir:/app/vendor

volumes:
  # Don't forget to declare it in the top-level volumes section!
  vendor-dir:
```
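After updating the file, recreate the container so the new volume definition takes effect, and confirm the named volume was created (a sketch using the Compose v1 CLI, matching the commands used in this post; Compose prefixes volume names with the project directory name):

```shell
# Recreate the container with the new volume definition
docker-compose up -d --force-recreate app

# Confirm the named volume exists (name will be prefixed, e.g. myproject_vendor-dir)
docker volume ls | grep vendor-dir
```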
💻 Environment
This was the development environment that I used for this legacy project:
Hardware
- Intel i7-8750H
- 24 GB of RAM
- 250 GB SSD
Software
- Windows 11 Enterprise
- Docker Desktop with WSL backend
- Visual Studio Code
Hope this helped you in some way.
Hope to see you soon!
Top comments (2)
I used such an approach a few years ago, but I quickly backtracked: the vendor folder is no longer on your hard disk (which is what I was aiming for), but, as a result, analysis tools like phan, phpstan, etc. that are launched from the host (and not from the container) can no longer find the dependencies. A big problem for me.
Take a look at github.com/jakzal/phpqa, a really useful Docker image with plenty of QA tools.
Using your suggestion, phpqa won't be able to access vendor.
Hi @cavo789! The weird thing about my approach is that it actually keeps the vendor folder on the host, thanks to the still-existing bind mount `./app:/app`.
At least on Windows it does, which is what I found surprising. Why the hell did I need a named volume for the vendor folder if the bind mount already covered it? My guess is that Docker internally writes through the named volume but still replicates the contents to the bind mount. Weird science here, but it works :)