This article was originally published a day earlier at https://maximorlov.com/automate-your-docker-deployments/
Deploying with Docker, how is it done?
Should you pull from GitHub and build a Docker image on the production server? Or should you push the image to the container registry at the same time you push to GitHub?
And btw, how do you automate all this?! Do you poll every x seconds/minutes on the production server and check for changes? That doesn't seem efficient.
Surely there must be a more elegant way to deploy Docker applications 🤔.
Spoiler alert: Yes, there is!
There are several ways to automate Docker deployments. Today you're going to learn a simple and straightforward approach.
You don't need to be an experienced sysadmin/DevOps person to follow along. If you're a frontend/backend person and new to servers, this tutorial is for you.
By the end of this tutorial, your application will be automatically deployed on every push to the master branch — no manual steps involved. If you have tests, those will run as well, and if any of them fail, deployment won't proceed.
We won't be using expensive or complicated infrastructure. Therefore, this approach works great for hobby projects and small-scale applications.
Goals
We're going to set up automated deployments based on the master branch. We'll automate all the steps between pushing your code to the repository and deploying an updated version of your application.
This will make sure the code on the master branch is the same code that's running on the production server, at all times.
On each commit to the master branch, the following will happen:
- Trigger a build in the CI provider
- Run tests, if any, and proceed if all tests pass
- Build and tag a Docker image
- Push image to the container registry
- Pull the image from the registry on the production server
- Stop the current container and start a new one from the latest image
Overview
A high-level overview of the steps we're going to take:
- Configure the CI/CD provider
- Write a deploy script that will:
- Build and upload a Docker image to the container registry
- Deploy image on the production server via remote SSH
In my examples, I'm going to use the following services:
- CircleCI as CI/CD provider
- Docker Hub as the container registry
Feel free to use whatever you're using already. It shouldn't be a problem to follow along. I'll explain the general concepts so that you can apply this to your setup.
If you're missing a service, I'll link to resources on how to get started with each one of them.
Requirements
To be able to follow along, there are some things you'll need:
- A containerised application. If you're using Node.js, I wrote an article on how to build a Docker image with Node.js
- A server with SSH access and basic shell knowledge
- Experience with running containers in Docker
With that out of the way, let's get started!
Continuous Integration and Continuous Deployment
What we're going to accomplish today is called Continuous Deployment (CD), and is usually coupled with Continuous Integration (CI) — automated testing. CI precedes CD in the automation pipeline to make sure broken code doesn't make it into production.
Therefore, it's sensible to have at least a basic test suite that makes sure the application starts and the main features work correctly before implementing automated deployments. Otherwise, you could quickly break production by pushing code that doesn't compile or has a major bug.
If you're working on a non-critical application, such as a hobby project, then you can implement automated deployments without a test suite.
Configure the CI/CD provider
Getting started with a CI/CD provider
If you already have a CI/CD provider connected to your repository, then you can head over to the next section.
CI/CD providers (or CI providers) sit between your code repository and your production server. They are the middlemen doing all the heavy lifting of building your application, running tests and deploying to production. You can even run cron jobs on them and do things that are not part of the CI or CD pipeline.
The most important thing to know is that a CI provider gives you configurable, short-lived servers you can use. You pay for how long you're using one, or multiple, servers in parallel.
If you're not using a CI provider, I recommend starting with GitHub Actions. It's built into GitHub, so it's easy to get started with. They also have a very generous free plan. Other popular providers are CircleCI and Travis CI. Since I'm more familiar with CircleCI, I'll be using them in my examples.
Configure the CI provider
We want the CI provider to run on each commit to the master branch. The provider should build our application, run tests, and if all tests have passed, execute our deploy script.
The configuration differs between providers, but the general approach is similar. You want to have a job triggered by a commit to the master branch, build the application and run the test suite, and as the last step, execute the deploy script.
In CircleCI, there are jobs and workflows. Jobs are a series of steps run on the server. A workflow runs and coordinates several jobs in parallel and/or in sequence. In jobs, you specify how to do something, and workflows describe when those jobs should run.
I've added a deploy job that runs after the build-and-test job. It checks out the code and runs the deploy script. We'll get to the internals of the script in the next section, but for now, you can add a simple hello world in a file named deploy.sh sitting at the root of your project. This will allow us to test if the job runs properly.
#!/bin/sh
echo "hello world"
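If you want to sanity-check the script locally before wiring it into CI, you can create the file and run it (CircleCI will later invoke it with bash; locally plain sh is enough):

```shell
# Create the placeholder deploy script at the project root
cat > deploy.sh <<'EOF'
#!/bin/sh
echo "hello world"
EOF

# Run it locally to confirm it executes
sh deploy.sh  # prints: hello world
```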
CircleCI looks at a configuration file in the following path: .circleci/config.yml. Let's add it with the following contents:
version: 2.1
jobs:
  # Install dependencies and run tests
  build-and-test:
    docker:
      - image: circleci/node:12.15.0-stretch
    steps:
      - checkout
      - run: npm ci
      - run: npm test
  # Build a Docker image, push it to Docker Hub,
  # and deploy it on the production server via remote SSH
  deploy:
    docker:
      - image: circleci/node:12.15.0-stretch
    steps:
      - checkout
      # Allow using Docker commands
      - setup_remote_docker
      - run: bash deploy.sh
The build-and-test job describes a common way of installing dependencies and running tests in a Node.js project. If you want to skip tests, you can remove the test command.
With circleci/node:12.15.0-stretch we specify which server image the CI provider should use to run our commands in. I'm using node:12.15.0-stretch in my Dockerfile, so this image mimics the production environment. It's a CircleCI-specific image that adds a few commonly used utilities in CI/CD pipelines, such as git and docker.
Let's add the workflow that coordinates when the jobs should run. We'll append the following section to .circleci/config.yml:
workflows:
  version: 2
  # Workflow name
  build-deploy:
    jobs:
      - build-and-test
      - deploy:
          requires:
            # Run after all tests have passed
            - build-and-test
          filters:
            branches:
              # Only deploy on pushes to the master branch
              only: master
The tests will run on all branches/PRs, but we'll only deploy on the master branch.
Deploy script
After you've confirmed that the CI provider runs the deploy script on each commit to master once all tests have passed, we can move on to the deployment section.
Getting started with a container registry
In the deploy script, we'll use a container registry to push the image so we can pull it from the production server.
A container registry is for containers what Github is for repositories and NPM is for Node.js modules. It's a central place to store and manage container images.
If you're new to the Docker ecosystem, the easiest option is to use the Docker Hub container registry. It's free for public repositories, and you get one free private repository.
The Docker CLI uses Docker Hub as the default container registry. Therefore, it will work out of the box.
Build a Docker image and push to the container registry
The first thing we'll do in the deploy script is build a new Docker image of the application. We give the image a name and a unique tag. A good way to generate a unique tag is to use the git hash of the latest commit. We also tag the image with the latest tag.
The image name should follow this format: [<registryname>/]<username>/<repository>. It has to match the username and repository name of the container registry you're going to push the image to in the next step. If you're using Docker Hub, that's the default, and you don't have to specify the container registry in the image name.
Let's replace the hello world example in deploy.sh with the following:
#!/bin/sh
IMAGE_NAME="my-username/my-app"
IMAGE_TAG=$(git rev-parse --short HEAD) # first 7 characters of the current commit hash
echo "Building Docker image ${IMAGE_NAME}:${IMAGE_TAG}, and tagging as latest"
docker build -t "${IMAGE_NAME}:${IMAGE_TAG}" .
docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${IMAGE_NAME}:latest"
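You can see what such a tag looks like by running the git command on its own — a throwaway repo is enough to illustrate (the committer identity below is made up):

```shell
# Create a throwaway repo with a single empty commit, just for illustration
git init -q tag-demo && cd tag-demo
git -c user.email=ci@example.com -c user.name=ci commit -q --allow-empty -m "init"

# The abbreviated commit hash is at least 7 hex characters
IMAGE_TAG=$(git rev-parse --short HEAD)
echo "image tag: ${IMAGE_TAG}"  # e.g. image tag: 3f9d2c1
```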
Next up, we want to upload the image to the container registry. We authenticate first using docker login. If you're using a different registry, you pass it as an argument (e.g. docker login my-registry ...).
We provide the username and password through environment variables set in the CI provider's dashboard. This is a safe way to work with credentials in CI/CD pipelines because they will be hidden in the output logs, and we don't have to commit them as code.
We append this to the deploy.sh file:
echo "Authenticating and pushing image to Docker Hub"
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker push "${IMAGE_NAME}:${IMAGE_TAG}"
docker push "${IMAGE_NAME}:latest"
The --password-stdin flag lets us provide the password to the Docker CLI in a non-interactive way. It also prevents the password from appearing in the shell's history or log files. In a CI environment, this is not an issue because the server environment is thrown away after the job finishes. However, I've included it anyway since people tend to copy/paste code in all sorts of places 🤷🏼‍♂️.
Deploy the image to production server via remote SSH
We have the new image pushed to the container registry, and we're ready to deploy it on the production server. We'll do that by executing several commands remotely through the SSH agent.
Authenticating with the SSH agent
Before we get to the deploy commands, we first need to make sure the SSH agent has access to the production server and works without manual interference.
With CircleCI, there are two ways you can add a private key to the CI server — through environment variables, or using a specific job step unique to CircleCI. I'm going to use an environment variable so you can take the same steps using your own CI provider. It also makes it easier to switch providers because you're not using provider-specific configuration.
To make it easier to store a multiline SSH key in an environment variable, we'll encode it into a base64 string. Assuming your private key is stored at .ssh/id_rsa, you can do this with:
cat .ssh/id_rsa | base64
You should see a long string output:
JWNWVyQ1FjS2pl...VocXRoVA=
Save this as an environment variable in the dashboard of your CI provider. Remember, the SSH key shouldn't have a passphrase. Otherwise, the CI job will require manual input and will break the automation.
In the deploy script, we'll decode it and save it to a file. We also change the file permission to be more strict because the SSH agent won't accept private keys with loose permissions. In code, it looks like this:
# Decode SSH key
echo "${SSH_KEY}" | base64 -d > ssh_key
chmod 600 ssh_key # private keys need to have strict permission to be accepted by SSH agent
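You can verify the whole encode/decode roundtrip locally with a throwaway file (sample_key below is stand-in content, not a real key):

```shell
# Stand-in for a multiline private key
printf 'line one\nline two\nline three\n' > sample_key

# Encode it as you would for the CI dashboard...
ENCODED=$(base64 < sample_key)

# ...then decode it the way the deploy script does, and restrict permissions
echo "${ENCODED}" | base64 -d > decoded_key
chmod 600 decoded_key

# The decoded file is byte-for-byte identical to the original
cmp -s sample_key decoded_key && echo "roundtrip OK"
```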
When the SSH agent tries to connect to a server it hasn't seen before, it asks if you trust the server and want to remember it in the future. This feature prevents man-in-the-middle attacks by confirming the server is who it claims to be.
Let's automate this manual step by adding the server's public key to ~/.ssh/known_hosts on the CI server. If you have used SSH before to connect to the production server, you'll find the public key stored in the same location on your laptop.
We'll use the same technique of encoding to base64:
cat ~/.ssh/known_hosts | grep [IP address] | base64
Replace [IP address] with the IP address of the production server, and you should get a similar string output as before. Add it as an environment variable in your CI provider.
Let's add the following to the script:
# Add production server to known hosts
echo "${SERVER_PUBLIC_KEY}" | base64 -d >> ~/.ssh/known_hosts
Run deploy commands
Finally, we execute several deploy commands remotely through SSH.
We pull the image from the container registry first. If the repository is private, you'll have to authenticate with docker login on the production server before you can pull the image.
Then, we stop and remove the currently running container. docker restart won't work here since it would stop and restart the same container. We want to start a new container based on the image we just downloaded.
Next, we start a container based on the new image with the relevant flags added to the docker run command. Adjust this as you see fit for your project.
Lastly, we clean up unused Docker objects to free space on the server. Docker is notorious for quickly taking up a lot of space.
Here's the last addition to the script:
echo "Deploying via remote SSH"
ssh -i ssh_key "root@${SERVER_IP}" \
"docker pull ${IMAGE_NAME}:${IMAGE_TAG} \
&& docker stop live-container \
&& docker rm live-container \
&& docker run --init -d --name live-container -p 80:3000 ${IMAGE_NAME}:${IMAGE_TAG} \
&& docker system prune -af" # remove unused images to free up space
Final script
The final deploy.sh script looks like this:
#!/bin/sh
# Stop script on first error
set -e
IMAGE_NAME="my-username/my-app"
IMAGE_TAG=$(git rev-parse --short HEAD) # first 7 characters of the current commit hash
echo "Building Docker image ${IMAGE_NAME}:${IMAGE_TAG}, and tagging as latest"
docker build -t "${IMAGE_NAME}:${IMAGE_TAG}" .
docker tag "${IMAGE_NAME}:${IMAGE_TAG}" "${IMAGE_NAME}:latest"
echo "Authenticating and pushing image to Docker Hub"
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker push "${IMAGE_NAME}:${IMAGE_TAG}"
docker push "${IMAGE_NAME}:latest"
# Decode SSH key
echo "${SSH_KEY}" | base64 -d > ssh_key
chmod 600 ssh_key # private keys need to have strict permission to be accepted by SSH agent
# Add production server to known hosts
echo "${SERVER_PUBLIC_KEY}" | base64 -d >> ~/.ssh/known_hosts
echo "Deploying via remote SSH"
ssh -i ssh_key "root@${SERVER_IP}" \
"docker pull ${IMAGE_NAME}:${IMAGE_TAG} \
&& docker stop live-container \
&& docker rm live-container \
&& docker run --init -d --name live-container -p 80:3000 ${IMAGE_NAME}:${IMAGE_TAG} \
&& docker system prune -af" # remove unused images to free up space
echo "Successfully deployed, hooray!"
I've added set -e at the top of the file to stop script execution at the first command that returns an error. Since we're running commands in sequence, we'd run into weird errors if the script continued.
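Here's a tiny illustration of the difference set -e makes — without it the shell keeps going past a failed command; with it, the script aborts immediately:

```shell
# Without set -e: execution continues after the failing command
sh -c 'false; echo "still running"'
# → still running

# With set -e: the script exits at the first failure, so the echo never runs
sh -c 'set -e; false; echo "still running"' || echo "aborted early"
# → aborted early
```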
Final thoughts
If you've got this far without hiccups — Congratulations 🎉!
More realistically though, you've probably faced some issues along the way or were confused at some point. I always find it helpful to see a fully finished and working example. I made an example project based on this article. You can use it as a guideline.