Chandra Shettigar

Create Production-ready Container Image + Docker Multi-Stage Build

Docker allows developers to simplify the build, packaging, and deployment of applications.

While it is not that hard to create a Dockerfile for a local development setup, making it production-ready takes a bit more effort and some level of OS and container-runtime knowledge.

Here are a few things a developer should consider when writing a production-ready Dockerfile:

  1. The smaller the image size, the better
  2. Multi-stage Docker build
  3. Principle of least privilege: don’t run containers as the root user
  4. Scan for vulnerability issues and fix them
  5. Use a verified base image + specific version
  6. Build one deployable image + no secret configs

The smaller the image size, the better

Larger images take up more space in registry storage and also increase deployment time, since more data has to move over the network.

There are three parts to this:

  1. Base image size: Choose a smaller base image instead of one with all possible utilities. You can find many variants of images for any technology. For example, for Ruby, you will find images built on Ubuntu, CentOS, Alpine, etc. Within those, there may be sub-variants that differentiate what you need for development versus for building a production-ready image.
  2. Build artifacts: Your application build and packaging commands may copy source code (in Ruby, Node, etc.) or compiled artifacts (for example, in Java). Avoid copying more than required, and exclude any tools, dependencies, and libraries that are only needed in development environments or at build time.
  3. Unnecessary files: Exclude files or folders that the application will never use from being copied into the Docker image. Use a .dockerignore file for this.
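As a sketch, a .dockerignore for a typical Rails project might look like the following (the entries are common examples, not a definitive list; adjust for your project):

```
.git
.env*
log/
tmp/
coverage/
node_modules/
*.md
```

Anything matched here is excluded from the build context, so it can never end up in an image layer.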

Multi-Stage Docker Build

Containers don’t need build-time tools to run applications in any environment. But we do need the build and packaging tools, build-time OS dependencies, native libraries, etc., to produce the application’s deployable artifacts.

Using the multi-stage feature, we can confine the build tools to intermediate stages and have the final stage (the deployable image) contain only what is necessary to run the application when deployed.

The Dockerfile below walks through the steps involved in building a docker image using a multi-stage build.

# Stage 1: builder
FROM ruby:3.1.2 AS builder
WORKDIR /app
RUN apt-get update -qq && apt-get -y install build-essential
COPY Gemfile Gemfile.lock /app/
RUN bundle install

# Stage 2: for local development
FROM builder AS develop
CMD [ "bundle", "exec", "rails", "s", "-b", "0.0.0.0" ]

# Stage 3: intermediate stage for prod
FROM builder AS prod-build
COPY . /app
RUN bundle exec rails assets:precompile
RUN bundle config set --local without 'development test' && \
    bundle config set --local path /rubygems
RUN bundle install

# Stage 4: final deployable image
FROM ruby:3.1.2-slim AS prod
WORKDIR /app
COPY --from=prod-build /app /app
COPY --from=prod-build /rubygems /rubygems
RUN bundle config set --local without 'development test' && \
    bundle config set --local path /rubygems
CMD [ "bundle", "exec", "rails", "s", "-b", "0.0.0.0" ]
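With named stages, you can build a specific target with docker build --target. For example (the myapp image tags here are placeholders):

```shell
# Build the local development image from the `develop` stage
docker build --target develop -t myapp:dev .

# Build the final deployable image from the `prod` stage
docker build --target prod -t myapp:prod .
```

Only the stages the target depends on are built, so the prod image never contains the develop-only layers.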

Principle of Least Privilege: Don’t run containers as the root user

This is a security concern. Containers that run with root privileges can potentially gain root access to the host machine. That may not be an issue in local development, but in production environments you don’t want to run anything, be it a container or any other application process, with root privileges.

Making a container run as a non-root user is not hard with a Dockerfile.
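For example, in the final prod stage you could add something like the following before the CMD (the app user and group names are illustrative):

```dockerfile
# Create an unprivileged system user and hand the app directory to it
RUN groupadd --system app && \
    useradd --system --gid app --create-home app && \
    chown -R app:app /app

# All subsequent instructions and the running container use this user
USER app
```

Any process the container starts after the USER instruction runs without root privileges.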

Scan for Vulnerability Issues & Fix them

If your base image is security-hardened and verified, the build and packaging steps may not introduce many vulnerabilities. Image scanning is generally run as part of the CI pipeline, and the tool you use may vary depending on your InfoSec guidelines.

You can scan using the docker scan command, which uses the Snyk service behind the scenes. The output of any container image scanner will list the vulnerabilities found, which you should fix or, in some cases, may decide to accept.
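A quick sketch of scanning a locally built image (myapp:latest is a placeholder tag; docker scan may first require authenticating with Snyk):

```shell
# Scan a locally built image for known vulnerabilities
docker scan myapp:latest

# Only report issues at or above a given severity
docker scan --severity high myapp:latest
```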

If your company uses a docker registry such as AWS ECR, DockerHub, etc., consider enabling the registry’s scan-on-push feature so that every docker push triggers a scan.

Use Verified Base Image + Specific Version

Developers mostly use images from DockerHub. In some companies, there may be a requirement to pull base images from a private image registry. Regardless, use base images either from your private registry or official images from DockerHub, and pin a specific version.
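Pinning looks like this in practice (the digest shown is a placeholder, not a real one):

```dockerfile
# Pin a specific version rather than a floating tag like `latest`
FROM ruby:3.1.2-slim

# Stricter still: pin the exact image digest so the base can never
# change underneath you
# FROM ruby:3.1.2-slim@sha256:<digest>
```

A floating tag like ruby:latest can silently change between builds; a pinned version (or digest) keeps builds reproducible.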

Build One Deployable Image + No Secret Configs

Some development teams end up creating environment-specific docker images, mainly to bake in environment-specific configs, including secret configs such as API keys and TLS certificates. Avoid creating an image per environment, and avoid copying secret configs into the deployable image.

Whether the secret config data is needed at build time or at run time, never copy it into the Docker image you build.

At build time, you may require secret tokens, certificates, etc., for things like connecting to the service you pull dependencies from. For example, you may need an auth token to pull private Node packages from npm.org, or you may have cross-git-repository dependencies that require an SSH key.

One option is BuildKit, which lets you pass build-time secrets to the docker build command without persisting them in the image.

Runtime secret configs such as database passwords, API keys, TLS certificates, etc., should never be packaged into the docker image you build and push to the docker registry. Supply them when the containers start up.
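For instance, the application can read its secrets from environment variables supplied by docker run -e, an --env-file, or your orchestrator’s secret mechanism. A minimal Ruby sketch (required_secret and DATABASE_PASSWORD are hypothetical names, not part of Rails):

```ruby
# Hypothetical helper: fetch a required runtime secret from the environment,
# failing fast at startup if it is missing instead of baking it into the image.
def required_secret(name)
  ENV.fetch(name) { raise KeyError, "Missing required secret: #{name}" }
end

# Example usage at boot time:
# db_password = required_secret("DATABASE_PASSWORD")
```

Failing fast at startup makes a missing secret an obvious deployment error rather than a mysterious runtime failure later.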

# syntax=docker.io/docker/dockerfile:1

FROM ruby:3.1.2

WORKDIR /app

RUN apt-get update -qq && \
    apt-get -y install build-essential

COPY Gemfile Gemfile.lock /app/

# The secret is mounted at /run/secrets/some.key only for this RUN step;
# it is never written into an image layer.
RUN --mount=type=secret,id=some.key bundle install

CMD [ "bundle", "exec", "rails", "s", "-b", "0.0.0.0" ]
DOCKER_BUILDKIT=1 docker build --secret id=some.key,src=./keys/some.key .

Once the image is built and pushed to the registry, developers should be able to pull it and run it locally. Copying secret configs into the image violates security compliance, and restricting developers from pulling the production-ready image from the registry is not a valid way to comply with security guidelines.

That's it

That's it for this article. I hope you found it helpful in understanding the benefits of using Docker multi-stage builds for creating production-ready container images. Try it out and let me know how it works for you!

Ready to take your skills to the next level in deploying applications to Kubernetes and creating infrastructure on AWS/EKS? Check out this Get Started course on Kubernetes. In this comprehensive course, you'll learn how to write infrastructure code, create Kubernetes clusters on AWS EKS, and deploy multiple microservices to a Kubernetes cluster.
