Nayan Patil

Posted on • Originally published at nayanpatil.hashnode.dev

Best Security Practices for Docker in 2023

Node.js has become a popular choice for building fast and scalable applications. However, deploying and scaling Node.js apps can be challenging, especially as your application grows. This is where Docker comes in - by containerizing your Node.js application, you can ensure consistent deployments across different environments and scale your app more easily.

In this article, we'll walk you through the best practices for dockerizing your Node.js app, including optimizing container size, using environment variables, and multistage builds.

Introduction

What is Docker?

Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime.

Prerequisites

  1. You should have Docker installed on your system; you can follow the installation guide in the official Docker docs.

  2. Basic knowledge of how Docker works.

  3. A basic Node.js application. If you don't have one, follow my guide on How To create an API using Node.js, Express, and Typescript, and clone the starter project via this GitHub repository.

Let's write a basic Dockerfile for this Node.js application:

FROM node
COPY . .
RUN npm install
CMD ["npm", "start"]

Best Security Practices for Docker in 2023

Always use a specific version for the base image for Dockerfile

# Use specific version
FROM node:16.17.1

COPY . .
RUN npm install
CMD ["npm", "start"]

When creating Docker images, it is important to use specific base image versions in your Dockerfile. This is because using the latest version of a base image may introduce compatibility issues with your application or dependencies, leading to unexpected errors and security vulnerabilities.

By using a specific version of a base image, you can ensure that your application runs consistently and reliably across different environments. Additionally, using specific base image versions can also help you comply with security and regulatory requirements.
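
For even stronger reproducibility, you can additionally pin the base image by digest, so the build always uses the exact same image bytes. This is a minimal sketch; the sha256 value below is a placeholder, and docker pull prints the real digest for your tag:

# Pin by tag and digest (the digest shown here is a placeholder)
FROM node:16.17.1@sha256:<digest-printed-by-docker-pull>

COPY . .
RUN npm install
CMD ["npm", "start"]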

Optimize your docker image by using a smaller base image

# Use specific version
# Use alpine for smaller base image
FROM node:16.17.1-alpine

COPY . .
RUN npm install
CMD ["npm", "start"]

Using smaller base images is a critical best practice for optimizing Docker images. Smaller images pull and deploy faster, reduce your storage and bandwidth costs, and minimize the number of potential vulnerabilities.

When selecting a smaller base image, you can avoid unnecessary dependencies and configurations that are not relevant to your application, ultimately leading to faster build times and smaller image sizes. By using a smaller base image, you can also reduce the attack surface of your application and improve its overall security posture.

With this, you can create Docker images that are smaller, faster, and more secure, enabling you to deliver your application with confidence.
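
To see the difference for yourself, pull both variants and compare them with docker images; the sizes mentioned in the comments below are rough figures and vary by release:

docker pull node:16.17.1
docker pull node:16.17.1-alpine
docker images node
# The full node:16 image is roughly 900 MB,
# while the alpine variant is closer to 170 MB.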

Specify the correct working directory in Dockerfile

# Use specific version
# Use alpine for smaller base image
FROM node:16.17.1-alpine

# Specify working directory for the application
WORKDIR /usr/app
COPY . .
RUN npm install
CMD ["npm", "start"]

When building Docker images, it is crucial to set the working directory to the appropriate location to ensure that your application's files and dependencies are correctly referenced. Setting the working directory to the wrong location can cause confusion and unexpected errors, which can delay development and deployment times.

By using the correct working directory, you can also improve the readability of your Dockerfile, making it easier for other developers to understand and maintain your code. Additionally, the correct working directory can help ensure that any subsequent commands are executed in the correct location, avoiding file path issues and other complications.

With this you can streamline your Docker workflow and reduce the risk of errors and delays, allowing you to focus on delivering high-quality applications.
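
As a quick illustration, every relative path in the instructions that follow WORKDIR resolves inside that directory:

WORKDIR /usr/app
COPY package.json .   # lands at /usr/app/package.json
RUN npm install       # runs inside /usr/app, creating /usr/app/node_modules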

Always use the .dockerignore file

# .dockerignore
node_modules
package-lock.json
yarn.lock
build
dist

This file allows you to specify files and directories that should be excluded from the build context, which can significantly reduce the build time and the size of the resulting image.

When Docker builds an image, it starts by creating a build context that includes all the files in the directory where the Dockerfile is located. This context is then sent to the Docker daemon, which uses it to build the image. However, not all files in the directory are necessary for the build, such as temporary files, log files, or cached dependencies. These files can cause the build to be slower and result in a larger image size.

To avoid this, you can create a .dockerignore file that lists the files and directories that should be excluded from the build context. This file uses the same syntax as .gitignore, allowing you to specify patterns of files or directories to exclude. For example, you might exclude all .log files, cache directories, or build artifacts.
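
On top of the entries above, projects often also exclude version-control data, log files, and the Docker build files themselves; which of these make sense depends on your project, so treat this as a starting point:

# Additional .dockerignore entries you might add
.git
.gitignore
*.log
Dockerfile
.dockerignore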

Copying package.json Separate from Source Code

# Use specific version
# Use alpine for smaller base image
FROM node:16.17.1-alpine

# Specify working directory for the application
WORKDIR /usr/app

# Copy only files which are required to install dependencies
COPY package.json .
RUN npm install

# Copy remaining source code after installing dependencies
COPY . .
CMD ["npm", "start"]

Copying package.json separately from the source code is a best practice for optimizing your Docker builds. By separating your application's dependencies from the source code, you can avoid unnecessary rebuilds and save time and resources.

When building a Docker image, copying the entire source code directory can be time-consuming and wasteful, especially if the source code changes frequently. Instead, by copying only the package.json file separately, Docker can leverage its layer caching capabilities to only rebuild the image when the dependencies change.
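
You can watch the cache at work by rebuilding after a source-only change. The my-app tag is illustrative, and this assumes your source lives under src/ as in the TypeScript starter:

# First build: every step runs, including npm install
docker build -t my-app .

# Change only application source, not package.json
echo "// tweak" >> src/index.ts

# Rebuild: the npm install layer is reused from cache,
# so only COPY . . and later steps are re-executed
docker build -t my-app .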

Use non root user

# Use specific version
# Use alpine for smaller base image
FROM node:16.17.1-alpine

# Specify working directory for the application
WORKDIR /usr/app

# Copy only files which are required to install dependencies
COPY package.json .
RUN npm install

# Use non root user
USER node

# Copy remaining source code after installing dependencies
# Use chown on the COPY command to set file ownership
COPY --chown=node:node . .
CMD ["npm", "start"]

Running applications as root can increase the risk of unauthorized access and compromise the security of your containerized applications. By creating and running containers with non-root users, you can significantly reduce the attack surface of your applications and limit the potential damage in case of a security breach.

In addition to improving security, using non-root users can also help ensure that your Docker containers are compliant with industry security standards and regulations. For example, using non-root users is a requirement for compliance with the Payment Card Industry Data Security Standard (PCI DSS).

After switching to a non-root user, we need to give that user access to our code files. As shown in the example above, we pass --chown to the COPY command, which makes the node user the owner of the copied source code.
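
The official Node images ship with a built-in node user, which is what the example above relies on. If you would rather create a dedicated user yourself, here is a minimal sketch for an alpine-based image, with appuser and appgroup as illustrative names:

# Create an unprivileged group and user (alpine/busybox syntax)
RUN addgroup -S appgroup && adduser -S appuser -G appgroup

# Switch to the new user
USER appuser

# Give the new user ownership of the copied files
COPY --chown=appuser:appgroup . .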

Multistage build for production

By using multiple stages in your Docker build process, you can reduce the size of your final image and improve its performance.

In a multistage build, each stage represents a different phase of the build process, allowing you to optimize each stage for its specific task. For example, you can use one stage for compiling your application code and another for running your application. By separating these tasks into different stages, you can eliminate unnecessary dependencies and files, resulting in a smaller, more efficient final image.

In addition to reducing image size, multistage builds can also improve security by eliminating unnecessary packages and files. This can reduce the attack surface of your Docker image and help ensure that only essential components are included.

# Stage 1: Build the application
# Use specific version
# Use alpine for smaller base image
FROM node:16.17.1-alpine AS build

# Specify working directory for the application
WORKDIR /usr/app

# Copy only files which are required to install dependencies
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Run the application
FROM node:16.17.1-alpine AS run

WORKDIR /usr/app

# set production environment
ENV NODE_ENV production

# copy build files from build stage
COPY --from=build /usr/app/build .

# Copy necessary files
COPY --from=build /usr/app/package.json .
COPY --from=build /usr/app/.env .

# Use chown command to set file permissions 
RUN chown -R node:node /usr/app

# Install production dependencies only
RUN npm install --omit=dev

# Use non root user
USER node

CMD ["npm","run", "start:prod"]

In this example, the first stage (named "build") installs the necessary dependencies, copies the source code, and builds the application. The resulting artefacts are then copied to the second stage (named "run"), which only includes the necessary dependencies to run the application in production. This separation of stages helps reduce the size of the final image and ensures that only essential components are included.

Note that the --from flag in the COPY commands refers to the first stage, allowing us to copy only the built artefacts into the final image. This is an example of how multistage builds can be used to optimize the Docker build process.

Exposing port in Dockerfile

# Stage 1: Build the application
# Use specific version
# Use alpine for smaller base image
FROM node:16.17.1-alpine AS build

# Specify working directory for the application
WORKDIR /usr/app

# Copy only files which are required to install dependencies
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Stage 2: Run the application
FROM node:16.17.1-alpine AS run

WORKDIR /usr/app

# set production environment
ENV NODE_ENV production

# copy build files from build stage
COPY --from=build /usr/app/build .

# Copy necessary files
COPY --from=build /usr/app/package.json .
COPY --from=build /usr/app/.env .

# Use chown command to set file permissions 
RUN chown -R node:node /usr/app

# Install production dependencies only
RUN npm install --omit=dev

# Use non root user
USER node

# Exposing port 8080
EXPOSE 8080

CMD ["npm","run", "start:prod"]

By exposing the ports that your application uses, you allow other services to communicate with your container.

To expose a port in Docker, use the EXPOSE instruction in your Dockerfile. This instruction informs Docker that the container will listen on the specified network ports at runtime. Note that this instruction does not publish the port, but rather documents the ports that the container is expected to use.

To actually publish a port, use the -p option when running the container, specifying the host port and the container port where your application is listening. For example, docker run -p 8080:80 my-image publishes port 80 inside the container as port 8080 on the host machine.

By publishing ports, you enable seamless communication between your container and other services, both within and outside of your Docker environment. This best practice helps ensure that your application can be reached by other services and easily integrated into a larger system.
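
Putting it together, this is how you might build and run the final image; my-node-app is just an illustrative tag, and it assumes the application listens on port 8080:

# Build the production image
docker build -t my-node-app .

# Publish container port 8080 on host port 8080
docker run -p 8080:8080 my-node-app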

Conclusion

In conclusion, Docker is a powerful tool for optimizing and scaling your Node.js application.

By following these best practices for Dockerizing your Node.js app, you can create images that are optimized for performance, security, and scalability. By using a specific base image version, a smaller base image, the correct working directory, and a non-root user, you can ensure that your images are secure and optimized for production use. Using multistage builds, configuring your app for production, and exposing ports correctly can also help ensure that your application can be easily scaled and integrated into a larger system.

In short, Dockerizing your Node.js app with these best practices gives you a containerized environment that is secure, scalable, and optimized for production, letting you focus on building the features and functionality your users need. So go ahead and Dockerize your Node.js app today, and experience the benefits of a containerized environment for yourself!

Top comments (4)

Alex Hanson

Consider using node:16.17 instead of pulling the specific patch version number. If the image is versioned correctly, you'll always pull the latest version with patch fixes, which is what you generally want.

Many times I'll also just say node:16 (or python:3.10) and let it always pull the latest release for those major versions. I let the unit tests in my CI pipeline catch any breaking changes with new container patch or minor version updates; just remember to always run a docker pull node:16.17 when you do a dev build.

Nayan Patil

Thank you for the suggestion.

Thomas Broyer

What's the reason you're ignoring the lockfiles ⁉️
I'm afraid I can't trust anyone ignoring lockfiles and therefore not using npm ci.

Nayan Patil

That's a good addition to the article. Thanks for it.