Multi-stage builds are a feature introduced in Docker 17.05 that lets you define multiple build stages in a single Dockerfile.
With multi-stage builds, you can use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base image, and each one begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. You can read more about multi-stage builds in the official Docker documentation.
This is very useful, for example, to keep your application's build dependencies out of the final image, giving you a much smaller image.
Having a single binary in the production image is great, but what about development? You will probably need your build dependencies to be present, and it's recommended to use the same Dockerfile for both production and development.
For some time, it wasn't really clear how to do this, but it's just one flag away.
The trick is to use the `--target` flag of the build command, which allows you to specify at which stage you want your build to stop.

Consider the following Dockerfile, which is responsible for building a Jekyll-based static site.
```Dockerfile
FROM ruby:2.5.1-alpine3.7 AS build-env

RUN apk update && apk add --no-cache nodejs build-base
RUN apk add yarn --no-cache --repository http://dl-3.alpinelinux.org/alpine/v3.8/community/ --allow-untrusted

RUN mkdir -p /app
WORKDIR /app

COPY Gemfile Gemfile.lock ./
RUN bundle install -j 4

COPY . ./
COPY package.json yarn.lock ./
RUN yarn install

RUN make _site

VOLUME /app

FROM nginx:1.13.0-alpine
WORKDIR /usr/share/nginx/html
COPY --from=build-env /app/_site ./
EXPOSE 80
```
As you can see, this Dockerfile has two FROM instructions. Each of them represents a stage in a multi-stage build.
In the first stage, I install all the tools necessary to build a Jekyll application, like Ruby and Bundler, plus Yarn for the frontend dependencies required by this specific site.
If I build the image now, the final image will just have the generated site.
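For example, building and running the production image could look like this (the image tag `mysite` and the host port are just illustrative choices on my part):

```shell
# Build the whole Dockerfile; only the final nginx stage
# (with the copied _site folder) ends up in the resulting image
docker build -t mysite .

# Serve the generated site locally on port 8080
docker run --rm -p 8080:80 mysite
```

The build dependencies from the first stage (Ruby, Node.js, Yarn) are discarded; only nginx and the static files remain.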
This is great for production use, but during development I don't want to rebuild the site every time I make a change, and I want nice things like hot reload and on-the-fly asset compilation.
That's where the `--target` flag comes into action. This flag allows you to specify at which stage you want your build to stop. So if you run:
```shell
docker build . --target=build-env
```
You will end up with an image containing exactly the contents of that stage. With docker-compose it's even simpler.
```yaml
version: '3.4'

services:
  web:
    build:
      context: .
      target: build-env
    volumes:
      - .:/app
    ports:
      - '8082:80'
    command: 'env PORT=4001 HOST=0.0.0.0 yarn run dev'
```
Note: `target` requires Compose file format version 3.4 or later.
So when running

```shell
docker-compose up
```

in my dev environment, I will end up with an image containing all the development dependencies and the source code mounted as a volume, where I can use live reload to immediately see my changes.
And that's it.
Understanding how multi-stage builds work opens your mind to a lot of possible use cases, like having a stage that installs your test dependencies to run unit tests before the production build.
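As a sketch, such a test stage might look like this (the stage name `test` and the `rspec` command are assumptions on my part; adapt them to your project's test runner):

```Dockerfile
# Reuses build-env, which already has Ruby and all gems installed
FROM build-env AS test
RUN bundle exec rspec
```

You could then run only the tests with `docker build . --target=test`, while a plain `docker build .` still produces the production image.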
Top comments (1)
It's not clear what the VOLUME /app directive in the Dockerfile is helping with. What's the goal of "exposing" the /app folder from within the container to the host? At which step are you (re)using that volume?
Also, in your docker-compose file you have:

```yaml
volumes:
  - .:/app
```
I understand that, for development purposes (live reloading, etc.), this directive mounts the project folder (from the host) inside the container, "overwriting" the /app folder that you initially copied inside the image (for building purposes). Right?
What I'm missing is: how do you get all the Yarn dependencies installed when doing development tasks? Is "yarn run dev" enough to install them?
Yes, I see that you run "yarn install" during the image build (for building/deploying purposes). But whatever that command installs inside /app (assuming the Yarn stuff gets installed right there) is "overwritten" by the /app volume you are mounting for development purposes.
If you could clarify that (and maybe point to a GitHub repo that applies this workflow), that would be great.