This post is about my experience with slow Docker performance on an M1 MacBook Air and how I dealt with it. I'll try to keep it short, so bear with me.
Short preface - as a developer, I've been quite happy with the performance of my M1 MBA for almost a year, but one thing has always bothered me: slow Docker performance. I use Docker on a daily basis, since most of my commercial projects heavily depend on it.
The Docker engine on Macs has a long history of running slowly. If you compare Docker for Mac with its counterpart on a regular low-spec Linux machine (or a WSL2 instance), you'd be surprised how slow Docker is even on the latest high-spec M1/M2 Macs, and how fast it is on Linux.
Alright, so the issue is clear - we need faster Docker on Macs. What about the solution? It's easy - use Arm Docker images instead of the default x86/amd64 ones :)
If you have an old codebase that uses a Dockerfile (or docker-compose) to kick off, you are very likely dealing with default x86/amd64 images, because that's what Docker Hub gives you by default, UNLESS you specify that you have an Arm CPU on board.
For example, if you happen to use NodeJS (like me), you would likely have something similar in your docker-compose.yml:
```yaml
api:
  image: 'node:16.17.1'
  ...
```
These are base images from the official Docker Hub which work fine on x86 architecture (the one you likely used previously), but perform slowly on M1/M2 Macs.
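If you want to confirm which architecture a locally pulled image actually targets, Docker's built-in inspect command can show it (the tag here is just the example from above):

```shell
# Print the OS/architecture a pulled image was built for.
# On an M1/M2 Mac running the default x86 image this reports "linux/amd64".
docker image inspect node:16.17.1 --format '{{.Os}}/{{.Architecture}}'
```

Seeing `amd64` here on an Apple Silicon machine means the image is running under emulation, which is where the slowdown comes from.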
That said, for the same NodeJS Docker configuration I changed the base image to arm64v8/node:16.17.1. The same entry would look like the following:
```yaml
api:
  image: 'arm64v8/node:16.17.1'
  ...
```
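An alternative, if you'd rather keep the official node image, is the per-service platform key supported by recent Compose versions; a minimal sketch, reusing the service and version names from the example above:

```yaml
services:
  api:
    image: 'node:16.17.1'
    # Explicitly request the arm64 variant of the official image
    # instead of switching to the arm64v8/ repository.
    platform: linux/arm64
```

This only helps when the official image actually publishes an arm64 build for that tag; otherwise the pull will fail.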
As for databases, I use Postgres, so my docker-compose.yml looked like the following:
```yaml
postgresql:
  image: 'postgres:10.6'
  ...
```
This worked stupidly slowly on M1; for example, a 2 GB dump file took more than an hour to restore inside a Docker container.
Changing the base image to an Arm-based one (arm64v8/postgres) gave me at least a 2-3x performance improvement.
Changing docker-compose.yml to the following did the trick:
```yaml
postgresql:
  image: 'arm64v8/postgres'
  ...
```
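One caveat worth noting: a bare `arm64v8/postgres` with no tag pulls `:latest`, which may be a newer Postgres major version than the `postgres:10.6` used before. A sketch with the tag pinned, assuming the 10.6 tag is published in the arm64v8/postgres repository (check its Tags tab to confirm):

```yaml
services:
  postgresql:
    # Pin the version tag so the data format matches the
    # postgres:10.6 instance you are migrating from.
    image: 'arm64v8/postgres:10.6'
```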
The logical question to ask after reading all this - where do you get these Arm images? The answer - the official Docker Hub pages.
Let's go through the NodeJS image configuration discussed above for the sake of example. Visit the official Docker Hub NodeJS page, scroll to the Supported Architectures section, and pick arm64v8 (basically, you search for arm64):
Navigating to the NodeJS arm64v8 image page will show you the available Docker image tags that can be used in your Dockerfiles or docker-compose.yml, as well as the "How to use this image" instructions:
From the description text we see that the template is arm64v8/node:<version>, so in my case that would be arm64v8/node:16.17.1 (the 16.17.1 tag was listed among the supported tags).
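Putting that template to use in a Dockerfile would look roughly like this - a minimal sketch, where the app layout (package.json, index.js) is illustrative, not from the original post:

```dockerfile
# Use the arm64 variant of the Node image as the base.
FROM arm64v8/node:16.17.1

WORKDIR /app

# Install dependencies first so they are cached as a separate layer.
COPY package*.json ./
RUN npm ci

COPY . .
CMD ["node", "index.js"]
```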
If you happen to use another Docker image (besides NodeJS or Postgres), pre-built Arm images are likely available on Docker Hub as well, so do yourself a favor and go search for them.
I hope this was useful for some of you.
Top comments (4)
Note: arm64v8 images are NOT official images - it's a separate repository altogether. Many popular packages, including node, have published linux/arm64/v8 platforms for the past year or two. For node, this means the main ##.## targets as well as current are supported. Some of the distro-specific variants (alpine) aren't. You can check the Tags tab of the official repository for this.
You should only fall back to other orgs if the mainline image doesn't have an arm64/v8 version and the x86_64 version performs poorly.
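Before falling back, you can check whether the mainline image already publishes an arm64 build by inspecting its manifest with the Docker CLI (the tag is the example from the article):

```shell
# Lists every platform published for this tag; look for "arm64".
docker manifest inspect node:16.17.1 | grep architecture
```

If `arm64` appears in the output, the official image will run natively on Apple Silicon with no repository switch needed.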
With the latest OS updates, Rosetta 2 and Docker perform much better than even in the past couple of years, and a lot of x86_64 containers that didn't run before (MS SQL, for example) now work.
Nice tip, thank you :)
Great advice, thanks! It worked.
Do you have any suggestions on how to handle different people on the team having different laptops? Maybe put that string in an environment variable?
In my team we simply maintain separate docker-compose configurations (and Dockerfiles) for the local environment. We tried to invest time in a universal script for both, but eventually went back to separate configurations.
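The environment-variable idea from the question above can also be done with Compose's built-in variable substitution; a sketch, where NODE_IMAGE is a made-up variable name that each developer would set in their local .env file:

```yaml
services:
  api:
    # Falls back to the x86 image when NODE_IMAGE is unset;
    # M1/M2 users put NODE_IMAGE=arm64v8/node:16.17.1 in .env.
    image: '${NODE_IMAGE:-node:16.17.1}'
```

This keeps a single docker-compose.yml in the repo while letting each laptop pick its own image variant.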