DEV Community

Jon Lauridsen

Exploring the Monorepo #5: Perfect Docker

Hi, let's start with a recap:

  • We have a pnpm-based monorepo that contains two apps and three libraries.
  • All those packages are Dockerized.
  • A GitHub Actions pipeline builds all packages on each commit.

Today we'll laser-focus on solving the issues we identified in the previous blog:

  1. Don't reinstall dependencies if only source-code has changed, it wastes a lot of time.
  2. Don't manually specify which individual packages to copy, it's a mess to maintain.
  3. Final images should not contain dev-dependencies, they should be as tidy and optimal as possible.

Read the previous blog for more details on how those issues came about, but now let's see about solving them.

Converging on a plan

It's critical to understand that Docker caches each line in the Dockerfile, and that the output of one line is the input of the next. So if a line generates new output all subsequent caches are invalidated. With that in mind, here's a common Docker anti-pattern that causes issue 1:

COPY . .
RUN pnpm install

If anything changes in any file then pnpm install has to run from scratch, because the COPY . . would produce a different output. This should always be optimized so only the files necessary to install dependencies are copied in first, then dependencies are installed, and then the rest of the source-files are copied in. Something like this:

COPY package.json .
COPY pnpm-lock.yaml .
COPY pnpm-workspace.yaml .
COPY apps/web/package.json ./apps/web/
COPY libs/types/package.json ./libs/types/
RUN pnpm install
COPY . .

Now all steps up to and including pnpm install remain cached so long as none of those meta-files change, and so Docker will skip all those steps. This is a massive speedup.

The downside is we're now manually specifying all those meta-files ☹️. And that leads to issue 2:

Using the COPY <meta-file> construct scales poorly because we have to author each Dockerfile with explicit and detailed information about which dependencies to copy in. And by using the COPY . . construct we copy all monorepo files, which needlessly bloats the image because for this example we only need the source-files from apps/web and libs/types (it's been a while since we talked about the specific dependencies but web only depends on types).

The key insight is that pnpm already understands how dependencies depend on each other, so we should be able to leverage that. We can't use pnpm directly from Dockerfile's COPY construct, but what if we use pnpm to generate a context that only contains the files needed for a specific package? Then the Dockerfile for that package could use COPY . . but it'd actually only copy in just the right files…

And, hang on, let's consider the meta-files too. The challenge is that we can't easily isolate all the package.json files, which is why we resorted to path-specific COPY commands. But what if we get really clever and create our custom context such that all the meta-files are placed in a /meta folder for easy copying, and we put the rest of the source-files in other folders?

Let's see if that'll work!

Custom Context Script

We introduced the custom context technique in the previous blog where we simply piped tar into Docker:

$ cd apps/web
$ tar -cf - ../.. | docker build -f apps/web/Dockerfile -

Now it's time to discard the naive tar command and come up with something more bespoke.

I've made a script that takes a Dockerfile and finds just the right files needed for that package, and outputs it all as a tarball so it's a drop-in replacement for the tar command.
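The core idea can be sketched in a few lines of Node. This is a simplified illustration, not the real pnpm-context.mjs: the actual script asks pnpm for the dependency graph and streams the result out as a tarball, whereas here the target package, its dependencies, and the file list are hypothetical inputs supplied by hand:

```javascript
// Sketch of the context layout behind pnpm-context.mjs: map every monorepo
// file into one of three folders inside the custom Docker context:
//   meta/ → only the manifests needed for `pnpm install`
//   deps/ → full sources of the packages the target depends on
//   pkg/  → full sources of the target package itself
function layoutContext(target, dependencies, files) {
  const ROOT_META = ["package.json", "pnpm-lock.yaml", "pnpm-workspace.yaml"];
  const entries = [];
  for (const file of files) {
    const owner = [target, ...dependencies].find((dir) =>
      file.startsWith(dir + "/")
    );
    if (!owner) {
      // Outside every relevant package: keep only the root meta-files.
      if (ROOT_META.includes(file)) entries.push(`meta/${file}`);
      continue;
    }
    // Manifests are duplicated into meta/ so that `COPY ./meta .`
    // alone is enough for the install step.
    if (file.split("/").pop() === "package.json") entries.push(`meta/${file}`);
    entries.push(`${owner === target ? "pkg" : "deps"}/${file}`);
  }
  return entries.sort();
}

// Example with hypothetical files (mirrors the --list-files output below):
const entries = layoutContext("apps/web", ["libs/types"], [
  "package.json",
  "pnpm-lock.yaml",
  "pnpm-workspace.yaml",
  "apps/web/package.json",
  "apps/web/src/index.ts",
  "apps/api/src/index.ts", // not a dependency of web → dropped
  "libs/types/package.json",
  "libs/types/src/index.ts",
]);
console.log(entries.join("\n"));
```

The point here is just the mapping of each file into meta/, deps/, or pkg/; files outside the target's dependency graph never reach the context at all.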

ℹ️ BTW, the full script is available on GitHub1s.com if you'd like to have a look.

Here's how it's used:

$ pnpm --silent pnpm-context -- --list-files apps/web/Dockerfile
Dockerfile
deps/libs/types/.gitignore
deps/libs/types/Dockerfile
deps/libs/types/libs-types.iml
deps/libs/types/package.json
deps/libs/types/src/index.ts
deps/libs/types/tsconfig.json
meta/apps/web/package.json
meta/libs/types/package.json
meta/package.json
meta/pnpm-lock.yaml
meta/pnpm-workspace.yaml
pkg/apps/web/.gitignore
pkg/apps/web/apps-web.iml
pkg/apps/web/package.json
pkg/apps/web/src/client.tsx
pkg/apps/web/src/index.ts
pkg/apps/web/src/node.d.ts
pkg/apps/web/src/pages/App.css
pkg/apps/web/src/pages/App.tsx
pkg/apps/web/src/pages/Home.css
pkg/apps/web/src/pages/Home.spec.tsx
pkg/apps/web/src/pages/Home.tsx
pkg/apps/web/src/pages/react.svg
pkg/apps/web/src/server.tsx
pkg/apps/web/tsconfig.json
pkg/apps/web/typings/index.d.ts

Now that's a lean context! Notice how only "libs/types" and "apps/web" files are present, and the files are split across three folders: "deps", "meta", and "pkg". That's the mechanism we'll use in the Dockerfile to copy in just the meta-files, but we'll take a look at that in a moment.

Actually this context is too lean 😅: The root tsconfig.json file isn't included because pnpm has no way of knowing it's used, but our packages do depend on it. And the bin/postinstall script is also required. To fix this we can specify additional inclusion patterns using -p arguments:

$ pnpm --silent pnpm-context -- -p 'tsconfig.json' -p 'bin/' --list-files apps/web/Dockerfile
...
pkg/bin/preinstall
pkg/tsconfig.json

ℹ️ BTW, the repository actually calls pnpm-context.mjs with a few more arguments, see the "docker:build" script in package.json on GitHub1s.com for all the details.

So now the context is good, let's see how we pipe it into Docker to build an image:

$ pnpm --silent pnpm-context -- -p 'tsconfig.json' -p 'bin/' \
  apps/web/Dockerfile | docker build --build-arg PACKAGE_PATH=apps/web - -t mono-web
[+] Building 3.1s (19/19) FINISHED

It works! But let's see how the Dockerfile actually works with this new context.

Dockerfile

ℹ️ BTW, in this article I'll only show explanatory snippets/examples of the Dockerfile, but you can see the full Dockerfile on GitHub1s.com.

It's pretty straightforward to use the new custom context subfolders, here's an example of how our new Dockerfiles are structured:

ARG PACKAGE_PATH
# ↑ Specified via Docker's `--build-arg` argument
COPY ./meta .
RUN pnpm install --filter "{${PACKAGE_PATH}}..." --frozen-lockfile
# ↑ `...` selects the package and its dependencies

COPY ./deps .
RUN pnpm build --if-present --filter "{${PACKAGE_PATH}}^..."
# ↑ `^...` ONLY selects the dependencies of the package, but not the package itself

COPY ./pkg .
RUN pnpm build --if-present --filter "{${PACKAGE_PATH}}"
RUN pnpm test --if-present --filter "{${PACKAGE_PATH}}"

# Everything's built and good to go 🎉

With this structure pnpm install only ever runs if any of the meta-files change, and the Dockerfile does not contain any manually specified package-specific paths. We've crushed issues #1 and #2! 🎉

Cache the pnpm store

It's good that we preserve the pnpm install cache as much as possible, but when the step does have to run it frustratingly re-downloads every single dependency from scratch. That's very wasteful in time and bandwidth! On our own machines pnpm downloads packages to a persisted store so it never has to re-download them, but that store never persists inside Docker because it evaporates as soon as a meta-file changes.

But Docker has a mechanism for exactly this: It allows a RUN command to mount a folder which is persisted on the host machine, so when the command runs it has access to files from previous runs. The code for this ends up a bit complex-looking, but it's worth the performance boost so let's try it out:

ARG PACKAGE_PATH
COPY ./meta .
RUN --mount=type=cache,id=pnpm-store,target=/root/.pnpm-store \
 # ↑ By caching the content-addressable store we stop
 # downloading the same dependencies again and again.
 # Unfortunately, doing this causes Docker to place
 # the pnpm content-addressable store on a different
 # virtual drive, which prohibits pnpm from
 # symlinking its content to its virtual store,
 # and that causes pnpm to fall back on copying the
 # files, and… that's totally fine! Except pnpm emits
 # many warnings that it's not using symlinks, so
 # we also must use `grep` to filter out those warnings.
 pnpm install --filter "{${PACKAGE_PATH}}..." \
     --frozen-lockfile \
 | grep --invert-match "cross-device link not permitted\|Falling back to copying packages from store"
# ↑ Using `--invert-match` to discard annoying output

It would be nice if we could tell pnpm to be quiet when it can't symlink, but we can survive this complexity.

Strip dev-dependencies

We've reached the last issue: We're bloating the final image with dev-dependencies because we don't clean up after building apps/web inside the image. It's a waste we shouldn't allow.

The solution is to reset back to having no dependencies installed, and then only installing the production dependencies. This is pretty straightforward to do by using Docker stages:

FROM node:16-alpine AS base
# Install pnpm

FROM base AS dev
# Install all dependencies and build the package

FROM base AS prod
# Install just prod dependencies

With this approach the "prod" stage isn't affected by whatever happens in the "dev" stage. Nice! But because dev builds the package we do need some way to transfer the built files from dev to prod. For that we can introduce an "assets" stage that isolates just the files that should go into the prod stage. So we can do something like this:

FROM node:16-alpine AS base
RUN npm --global install pnpm
WORKDIR /root/monorepo

FROM base AS dev
# Install all dependencies and build the package

FROM dev AS assets
RUN rm -rf node_modules && pnpm recursive exec -- rm -rf ./node_modules ./src
# ↑ Reset back to no dependencies installed, and delete all
# src folders because we don't need source-files. 
# This way whatever files got built are left behind.

FROM base AS prod
RUN pnpm install --prod --filter "{${PACKAGE_PATH}}..."
# ↑ Install just prod dependencies
COPY --from=assets /root/monorepo .

So here the "assets" stage isolates whatever code was generated in the dev stage, which the prod stage then copies into itself. Does it work?

$ cd apps/web
$ pnpm build
$ docker run mono-web
[razzle] > Started on port 3000

🎉

Updating the CI Script

It's one thing to get all this working locally, but we also need to update our GitHub Actions CI script.

ℹ️ BTW, you can see the full CI script on GitHub1s.com.

The first problem is that CI won't run the pnpm-context.mjs script at all, because we never actually install the dependencies it needs. To do that we must run pnpm install just for the monorepo's root. There's an easy way to do that with the GitHub Action pnpm/action-setup: It can both install pnpm and run pnpm install, so we can tell it to install dependencies for the monorepo root:

      - uses: pnpm/action-setup@v2
        with:
          run_install: |
            - args: [--frozen-lockfile, --filter "exploring-the-monorepo"]

But then we get another exciting error: The Docker build fails because we use the mount feature (to cache the pnpm store), and it turns out we need to enable "BuildKit" mode to use it. BuildKit is an upcoming set of Docker features that aren't yet enabled by default, and the solution turns out to be rather simple: Set the DOCKER_BUILDKIT environment variable:

$ DOCKER_BUILDKIT=1 docker build
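In the GitHub Actions workflow the variable can be set on the build step itself via a step-level `env` block. A minimal sketch (the step name and run command are illustrative, assuming the build is wrapped in the "docker:build" package script mentioned earlier):

```yaml
# Sketch: enable BuildKit for the image-build step
- name: Build image
  run: pnpm run docker:build
  env:
    DOCKER_BUILDKIT: 1
```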

Conclusion

The issues we set out to vanquish have been resolved 🎉. We now build images that play nice with Docker caching, the Dockerfiles are free from manually-specified dependency concerns, and the final images are very lean and optimal. Quite nice!

I feel the pnpm investment is really paying off: it was already a nice CLI to use, and it's amazing that it also has a pretty straightforward programmatic API we could use for our dependency-graph logic!

This article's title promised "perfect", did we achieve that? Well, no, perfection is a high bar, but we've addressed all the practical concerns I've experienced so I'm happy to call it a day here. We wouldn't want to get too carried away after all 👀 (I think for some, this entire article-series is already deep into "carried away" territory).

I'd love to hear if you have any questions or comments, or if there are any directions you'd like to see explored in future articles. So please leave a comment.

Top comments (7)

Alex Puschinksy

Thanks for the awesome post!

There's an unfortunate side effect of pnpm's decision to use a single lockfile for the entire monorepo. Imagine a monorepo with packages A and B. A commit adds an npm dependency to package A, and changes some code in package B.
Ideally, during package B's docker image build, the pnpm install step should be cached - its package.json did not change. However, since in the Dockerfile we first copy the entire pnpm-lock.yaml, we invalidate package B's pnpm install docker cache layer.

There's a tool called '@pnpm/make-dedicated-lockfile' that aims to create a dedicated lockfile for a specific package, but unfortunately it does not support packages that are not published to a registry - github.com/pnpm/pnpm/issues/3114#i...

Seems that a perfect Dockerfile for a pnpm monorepo is still not trivial.

Léonard Henriquez • Edited

Great article @jonlauridsen !
You should take a look at Turborepo and particularly their new "prune" feature (released last month). It solves quite elegantly the exact problem you are talking about.
turbo.build/blog/turbo-0-4-0#exper...
turbo.build/repo/docs/handbook/dep...

lars.gersmann • Edited

Hi Jon,

I really enjoyed your article a lot - awesome!

You can simplify your "list-packages" script in package.json by replacing

echo [$(pnpm -s m ls --depth -1 | tr \" \" \"\n\" | grep -o \"@.*@\" | rev | cut -c 2- | rev | sed -e 's/\\(.*\\)/\"\\1\"/' | paste -sd, - )]

with

pnpm list --recursive '@*/*' --json | jq -rc  '[.[].name | select( . != null )]'

it's much shorter :-)

Jon Lauridsen

🤯

Amazing, thanks.

Pier-Luc Gagnon

Note that pnpm's author recommends not using pnpm install --filter as it is buggy/unpredictable: discord.com/channels/7315995386655...

Jon Lauridsen

I get some Discord error when clicking it, saying there's nothing there. Weird. But yeah that same topic was also mentioned here: github.com/pnpm/pnpm/discussions/3615.

I've so far used install --filter in prod without any issues, and at this point I wouldn't be surprised if the unpredictable behavior relates to e.g. complex peer dependencies of various versions or something similar, i.e. not common or random occurrences. I'd be happy to learn more though.

There's also a lengthy discussion of monorepo patterns here: github.com/pnpm/pnpm/issues/3114.

Ultimately we could choose not to use install --filter at all, or perhaps they'll remove the feature entirely, but in that event I sure hope there'll be some other way to limit the dependencies to a specific package because it's such a handy feature.

Anyway, thanks 👍

Sebastiaan Eddy

Hi Jon.

Very insightful article, do you know how to integrate this with docker compose as well?

Best Regards