I recently made the switch from Mac to Windows - I won't get into the reasons why, other than to mutter a few grumbles about keyboards. What I will say is that having our projects well Dockerised proved to be invaluable in making the move. Where previously I'd have lost days to getting my dev environment set up, a simple `docker-compose up dev` worked seamlessly, out of the box.
My dev environment isn't the only area where Docker is valuable, of course. While it may seemingly go against convention, we generally choose to deploy our Next.js apps as Fargate services. I won't get into the many DevOps-related reasons for this, but I will show you how we use Docker and Docker Compose to run our Next.js apps effectively, across environments...
I'm going to assume you have Docker Desktop installed, know the difference between Docker and Docker Compose, and have a working knowledge of Next.js.
With those prerequisites out of the way, let's start with our `Dockerfile`:
```dockerfile
FROM node:current-alpine AS base
WORKDIR /base
COPY package*.json ./
RUN npm install
COPY . .

FROM base AS build
ENV NODE_ENV=production
WORKDIR /build
COPY --from=base /base ./
RUN npm run build

FROM node:current-alpine AS production
ENV NODE_ENV=production
WORKDIR /app
COPY --from=build /build/package*.json ./
COPY --from=build /build/.next ./.next
COPY --from=build /build/public ./public
RUN npm install next
EXPOSE 3000
CMD npm run start
```
This may not look like the `Dockerfile` you were expecting... This is a "multi-stage" Dockerfile, which can be used for both development and production deploys. There are various reasons you may want to do this, but the primary one is that the size of our Docker images can be reduced dramatically, as they only bundle the result of the final step.
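You can see that size benefit for yourself by building only the final stage and inspecting the result (the `my-next-app` tag here is just a placeholder name, not part of the setup above):

```sh
# Build up to the "production" stage only; the earlier stages are
# used as intermediate layers and left out of the final image
docker build --target production -t my-next-app .

# Inspect the resulting image size
docker images my-next-app
```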
Let's take a look at that first step:
```dockerfile
FROM node:current-alpine AS base
WORKDIR /base
COPY package*.json ./
RUN npm install
COPY . .
```
This looks more or less like any other Node-related Dockerfile: it extends from the official Node image, copies our `package.json` and `package-lock.json` and installs our dependencies, then adds the working project files.
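One thing worth noting: `COPY . .` copies everything in the project root into the image, including a local `node_modules` directory if you have one. A `.dockerignore` file (an optional addition on my part, not shown in the setup above) keeps the heavy, machine-specific bits out:

```
node_modules
.next
.git
```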
This next stage is where things get interesting - this is where we compile our Next.js app:
```dockerfile
FROM base AS build
ENV NODE_ENV=production
WORKDIR /build
COPY --from=base /base ./
RUN npm run build
```
Each stage of a multi-stage Dockerfile is self-contained, so we have to explicitly copy any files we want from the base step. This step only relates to a production build, so we're explicitly setting `NODE_ENV` to `production`, copying the files from the base step, and running the build script specified in our `package.json`.
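For reference, the scripts this relies on are the standard ones generated by `create-next-app` - I'm assuming a `package.json` along these lines:

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start"
  }
}
```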
With our app compiled, we're on to the final step: creating a lean, production-ready image.
```dockerfile
FROM node:current-alpine AS production
ENV NODE_ENV=production
WORKDIR /app
COPY --from=build /build/package*.json ./
COPY --from=build /build/.next ./.next
COPY --from=build /build/public ./public
RUN npm install next
EXPOSE 3000
CMD npm run start
```
From the previous build step, we copy across our `package.json`, the `.next` directory which contains our compiled app, and the `public` directory which contains our public assets. Finally, it installs the `next` package, uses it to start our compiled app, and exposes it at `localhost:3000`. The only files this final image contains are the ones that we copied across - the essentials - keeping it super lean. We've ditched our heavy `node_modules` directory, among other things.
> **Note:** You may have noticed I specified `ENV NODE_ENV=production` again in this step. This is because ENV variables aren't shared between steps, so they need to be duplicated.
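If you'd rather not duplicate the value itself, one pattern worth knowing (an addition on my part, not required by the setup above) is declaring a build argument before the first `FROM`. Such an `ARG` is global to the Dockerfile, though each stage still has to re-declare it to opt in:

```dockerfile
# Declared before the first FROM, so visible to every stage that asks
ARG NODE_ENV=production

FROM node:current-alpine AS build
# Re-declare to pull the global ARG into this stage
ARG NODE_ENV
ENV NODE_ENV=${NODE_ENV}
```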
That's our `Dockerfile` done; now how do we run `next dev` with this thing? Simple: we need a `docker-compose.yml` file:
```yaml
version: "3.7"

x-common-props: &common-props
  build:
    context: ./
    target: base
  working_dir: /base
  volumes:
    - ./app:/base
    - node_modules:/base/node_modules

services:
  npm:
    <<: *common-props
    entrypoint: npm
  dev:
    <<: *common-props
    ports:
      - "3000:3000"
    command: npm run dev

volumes:
  node_modules:
```
This gives me two local services: `npm` and `dev`. Both use the `base` step from our `Dockerfile`, but:
- `npm` specifies the `npm` command as its entrypoint, so I can write convenient commands like `docker-compose run npm i -s moment`.
- `dev` specifies the `dev` script from our `package.json`, so I can start the whole thing up with `docker-compose up dev` and see my app running at `localhost:3000`.
I have some `common-props` to share attributes between services, and these include mounting a volume for my `node_modules` - a useful trick for sharing modules between containers and saving a lot of time.
To sum up: adding these two files to the root of any standard Next.js project should have you up and running in no time, with:
- Out of the box, cross-platform development for your whole team.
- Blazing fast, fully containerized production deploys.
If you've any reservations about how performant a containerised Next.js app is compared to one targeting serverless, I leave you with this timeline from the dynamic, database-driven homepage of one of our projects:
Top comments (7)
I love the approach you took in order to ship a minimal image for production! Unfortunately I'm getting the following error when trying to use your example with Next v10.0.5:

> Error: Couldn't find a `pages` directory. Please create one under the project root

I don't really get why Next isn't just using what's in `.next`... Do you maybe have any idea what's causing this?
Nevermind, had an issue with running the production app. Fixed it and now your example works like a charm!
Please, share that fix...
Well, not as impressive as it might sound, but I for some reason ran `next` instead of `next start` 🙈

Thank you, it works for the dev env but not for prod actually - did you forget some parts?
I added a prod config with the production target, but the build step failed as SSG (or ISG) needs another service to be hit to fetch data. I've tried adding a `depends_on` docker-compose conf but it didn't help and I still have a FetchError: getaddrinfo ENOTFOUND...
Just the command `RUN npm install next` adds >100MB to the image.
How would this work in a production environment, since it pretty much relies on the image being built on the server?