Kyle Galbraith

How to Build Your Docker Images in AWS with Ease

Carrying on my latest theme of implementing as much automation as possible in AWS, today I am going to share how we can build Docker images in our CI/CD pipeline within AWS. Specifically, we are going to explore:

  • Extending our Terraform template that provisions our CI/CD pipeline to provision an AWS Elastic Container Registry (ECR).
  • Creating a simple Dockerfile for a barebones ExpressJS API.
  • Using docker build, tag, and push inside of our buildspec.yml file to publish our latest image to ECR.
  • Pulling the latest image from our registry and running it locally.

Now that we have the lay of the land, let's talk about how we can extend our usual CI/CD Terraform template to support building Docker images.

Incorporating ECR into our CI/CD Pipeline

To get started we first need to create our Terraform template that provisions our CI/CD pipeline. We can do this using the terraform-aws-codecommit-cicd module that we have seen in a previous post.

The full template can be found here.

variable "image_name" {
  type = "string"
}

module "codecommit-cicd" {
  source                    = "git::https://github.com/slalompdx/terraform-aws-codecommit-cicd.git?ref=master"
  repo_name                 = "docker-image-build"                                                             # Required
  organization_name         = "kylegalbraith"                                                                  # Required
  repo_default_branch       = "master"                                                                         # Default value
  aws_region                = "us-west-2"                                                                      # Default value
  char_delimiter            = "-"                                                                              # Default value
  environment               = "dev"                                                                            # Default value
  build_timeout             = "5"                                                                              # Default value
  build_compute_type        = "BUILD_GENERAL1_SMALL"                                                           # Default value
  build_image               = "aws/codebuild/docker:17.09.0"                                                   # Default value
  build_privileged_override = "true"                                                                           # Default value
  test_buildspec            = "buildspec_test.yml"                                                             # Default value
  package_buildspec         = "buildspec.yml"                                                                  # Default value
  force_artifact_destroy    = "true"                                                                           # Default value
}

At the top we see we have declared a variable, image_name, that will be passed into the template. Next, we see that we create our codecommit-cicd module. This is slightly different from what we have seen in the past.

  1. First, the build_image property is set to aws/codebuild/docker:17.09.0. This is the AWS provided CodeBuild image that allows us to build our own Docker images.
  2. Second, the build_privileged_override property is new. This property tells CodeBuild that we are going to be building Docker images, so it grants our build privileged access to the Docker daemon.

Those are the only two things we need to change about our CI/CD Pipeline in order to support building Docker images in AWS CodeBuild. Let's look at the next two resources defined below these.

resource "aws_ecr_repository" "image_repository" {
  name = "${var.image_name}"
}

resource "aws_iam_role_policy" "codebuild_policy" {
  name = "serverless-codebuild-automation-policy"
  role = "${module.codecommit-cicd.codebuild_role_name}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:GetAuthorizationToken",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
POLICY
}

We begin by defining our AWS Elastic Container Registry (ECR). This is a fully managed Docker container registry inside of our AWS account. We can store, manage, and deploy our container images using ECR. Notice here we use the image_name variable that was passed into our template for the name of our ECR repository.
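One thing to be aware of: since every build overwrites the latest tag, untagged image layers will pile up in the repository over time. An optional addition, not part of the original template, is an aws_ecr_lifecycle_policy that expires them. A minimal sketch:

resource "aws_ecr_lifecycle_policy" "expire_untagged" {
  repository = "${aws_ecr_repository.image_repository.name}"

  policy = <<POLICY
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images after 7 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 7
      },
      "action": {
        "type": "expire"
      }
    }
  ]
}
POLICY
}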

The final piece we see here is an additional IAM policy that is being attached to the role our CodeBuild project assumes. This policy is granting permission to our CodeBuild project to push images to our image repository.
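If you want to tighten this up, everything except ecr:GetAuthorizationToken (which only supports a wildcard resource) could be scoped down to the repository we just created. Here is a sketch of that variant, reusing the same names from the template above:

resource "aws_iam_role_policy" "codebuild_policy" {
  name = "serverless-codebuild-automation-policy"
  role = "${module.codecommit-cicd.codebuild_role_name}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["ecr:GetAuthorizationToken"],
      "Resource": "*",
      "Effect": "Allow"
    },
    {
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:CompleteLayerUpload",
        "ecr:InitiateLayerUpload",
        "ecr:PutImage",
        "ecr:UploadLayerPart"
      ],
      "Resource": "${aws_ecr_repository.image_repository.arn}",
      "Effect": "Allow"
    }
  ]
}
POLICY
}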

Now that we know what resources are going to be created, let's go ahead and actually create them using Terraform.

To get started, we initialize our providers and our template with the init command.

deployment-pipeline$ terraform init
Initializing modules...
- module.codecommit-cicd
- module.codecommit-cicd.unique_label

Initializing provider plugins...

Once our template is initialized we can run a quick plan command to confirm all of the resources that are going to be created.

deployment-pipeline$ terraform plan -var image_name=sample-express-app
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + aws_ecr_repository.image_repository

....
......
........

Plan: 13 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------


We see that 13 resources are going to be created. Let's go ahead and run our apply command to create all of these in our AWS account.

deployment-pipeline$ terraform apply -auto-approve -var image_name=sample-express-app
data.aws_iam_policy_document.codepipeline_assume_policy: Refreshing state...
module.codecommit-cicd.module.unique_label.null_resource.default: Creating...

....
......
........

module.codecommit-cicd.aws_iam_role_policy.codebuild_policy: Creation complete after 1s (ID: docker-image-build-codebuild-role:docker-image-build-codebuild-policy)
module.codecommit-cicd.aws_codepipeline.codepipeline: Creation complete after 1s (ID: docker-image-build)

Apply complete! Resources: 13 added, 0 changed, 0 destroyed.

Outputs:

codebuild_role = arn:aws:iam::<account-id>:role/docker-image-build-codebuild-role
codepipeline_role = arn:aws:iam::<account-id>:role/docker-image-build-codepipeline-role
ecr_image_respository_url = <account-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app
repo_url = https://git-codecommit.us-west-2.amazonaws.com/v1/repos/docker-image-build


We see that 13 resources have been created and that our Git repo url and our ECR repo url have been output. Copy the ECR url somewhere for the time being, as we will need it when we configure the buildspec.yml file CodeBuild is going to use.
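If you misplace the url later, you can always print it again from the same directory with terraform output, using the output name exactly as the template spells it above:

deployment-pipeline$ terraform output ecr_image_respository_url
<account-id>.dkr.ecr.us-west-2.amazonaws.com/sample-express-app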

Let's do a quick overview of the Docker image we are going to build and push to our new ECR repository.

Our sample application and Docker image

For our demo, I have created a GitHub repository that has a sample Express API configured. In it, we see our api.js file that contains our application logic.

const express = require('express');

// Constants
const PORT = 8080;
const HOST = '0.0.0.0';

const app = express();
app.get('/health', (req, res) => {
  res.send('The API is healthy, thanks for checking!\n');
});

app.listen(PORT, HOST);
console.log(`Running API on port ${PORT}`);


This isn't doing anything magical, but it is perfect for demonstrating our Docker image construction. We are setting up Express to listen on port 8080 and setting up a route, /health, to return a simple response.

To go with our sample application we also have a sample Dockerfile.

FROM node:8
WORKDIR /src/app

# Install app dependencies
COPY package*.json ./
RUN npm install

# Copy app contents
COPY . .

# App runs on port 8080
EXPOSE 8080

# Start the app
CMD [ "npm", "start"]

A quick rundown of what our Dockerfile is doing here.

  • FROM specifies the base image our image is going to be built from. In our case, we are using a Node 8 image that is coming from Docker Hub.
  • WORKDIR is setting our working directory for any commands that appear after.
  • COPY is just doing a copy of our package.json files to our working directory.
  • RUN is used for running commands, here we are running npm install.
  • EXPOSE is telling Docker that our container plans to listen on port 8080.
  • CMD is specifying the default behavior for our container. In our case, we are calling a script, start, inside of our package.json that is then starting our Express server in api.js.

See, not too bad, right? There are a lot of things you can configure inside of a Dockerfile. This is fantastic for getting your images just right, and it allows your containers to launch and do what they need to do with no further configuration necessary.
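For reference, the start script that CMD invokes lives in the repo's package.json. The run output later in this post shows the package is named docker_app_express and that start runs node api.js; the other fields in this sketch (version numbers, the Express version) are assumptions:

{
  "name": "docker_app_express",
  "version": "1.0.0",
  "scripts": {
    "start": "node api.js"
  },
  "dependencies": {
    "express": "^4.16.0"
  }
}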

Building our Docker image inside of our CI/CD Pipeline

We have our underlying AWS resources for our CI/CD Pipeline provisioned. We have a sample application that has a Dockerfile associated with it. Now all that is left is building our Docker image inside of our deployment pipeline in AWS.

The final thing we need to do in order to start building our Docker image inside of AWS CodePipeline and CodeBuild is to configure our buildspec.yml file.

Again, looking at our sample repository we see that our buildspec.yml file is at the root of our repo. Taking a look at it we see the following commands.

version: 0.2
phases:
  install:
    commands:
      - echo install step...
  pre_build:
    commands:
      - echo logging in to AWS ECR...
      - $(aws ecr get-login --no-include-email --region us-west-2)
  build:
    commands:
      - echo build Docker image on `date`
      - cd src
      - docker build -t sample-express-app:latest .
      - docker tag sample-express-app:latest <your-ecr-url>/sample-express-app:latest
  post_build:
    commands:
      - echo build Docker image complete `date`
      - echo push latest Docker images to ECR...
      - docker push <your-ecr-url>/sample-express-app:latest

In the pre_build step we are issuing a get-login call to ECR via the AWS CLI. The result of this call is immediately executed, but for reference, here is what the call returns.

docker login -u AWS -p <complex-password> https://<AWS-account-id>.dkr.ecr.us-west-2.amazonaws.com

The call is returning a Docker login command in order to access our ECR repository.
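As an aside, newer versions of the AWS CLI (v2) removed get-login in favor of get-login-password. If you are following along on a current CLI, the equivalent login looks like this, with the same region and account id placeholder as above:

aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <AWS-account-id>.dkr.ecr.us-west-2.amazonaws.com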

Next, in the build command we are running docker build from within our src directory because that is where our Dockerfile is located. The build command is going to build an image from that file and tag it with sample-express-app:latest.

We then take that tagged source image and add a tagged target image which uses our ECR repository url.

With all of that done, we run a docker push command to push our target image to the ECR repository.
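One refinement worth considering, though it is not in the buildspec above, is also tagging each image with the commit that produced it, using the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable that CodeBuild exposes. A sketch of the extra commands:

# hypothetical additions to the post_build commands; <your-ecr-url> as above
- docker tag sample-express-app:latest <your-ecr-url>/sample-express-app:$CODEBUILD_RESOLVED_SOURCE_VERSION
- docker push <your-ecr-url>/sample-express-app:$CODEBUILD_RESOLVED_SOURCE_VERSION

Since latest is overwritten on every push, a per-commit tag makes it easy to roll back to an exact build later.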

Cool right? Now with every commit to master in our repository our CI/CD Pipeline is triggered. Our build process can then take our code and Dockerfile to produce a new container image that is pushed directly to our private image repository in ECR.

Testing our plumbing

We got our infrastructure stood up in AWS. When a new commit comes in on master a new container image is built off of our Dockerfile. We push that new image directly to our private image repository in ECR.

Testing is straightforward. We can just pull the latest image from our ECR repository.

kyleg$ $(aws ecr get-login --no-include-email --region us-west-2)
Login succeeded
kyleg$ docker pull <your-ECR-url>/sample-express-app:latest
latest: Pulling from sample-express-app
kyleg$ docker run -p 8080:8080 -i <your-ECR-url>/sample-express-app:latest
> docker_app_express@1.0.0 start /src/app
> node api.js

Running API on port 8080

Now we can open up localhost:8080/health in our browser or run a cURL request from the command line.

kyleg$ curl localhost:8080/health
The API is healthy, thanks for checking!

With that, we have successfully used our ECR image to create a container that we can run locally.

Conclusion

In this post, we have dived into how we can create a CI/CD Pipeline in AWS in order to continuously build Docker images for our sample application. We also demonstrated that we can publish those images to our own private image repository using Elastic Container Registry.

With just a few small tweaks to our Terraform module, we were able to stand up this pipeline in just a few minutes. With the basics of Docker under our belt, we can start building more sophisticated images.

We could explore how to push those images to a public repository like Docker Hub. Or maybe how to deploy containers using those images with EKS or ECS. The possibilities are almost endless.

If you have any questions relating to this post, please just drop a comment below and I'll be happy to help out.

Are you hungry to learn more about Amazon Web Services?

Want to learn more about AWS? I recently released an e-book and video course that cuts through the sea of information. It focuses on hosting, securing, and deploying static websites on AWS. The goal is to learn services related to this problem as you are using them. If you have been wanting to learn AWS, but you’re not sure where to start, then check out my course.

Top comments (3)

Maxime Hilaire

Thanks for this article.

I had to add the runtime versions though. I was getting this error: Phase context status code: YAML_FILE_ERROR Message: This build image requires selecting at least one runtime version.
docs.aws.amazon.com/codebuild/late...

This is the current list of supported runtimes: docs.aws.amazon.com/codebuild/late...

I added this snippet and it worked:

phases:
  install:
    runtime-versions:
      docker: 18

David J Eddy

My favorite three topics in one! AWS, Automation, and CI/CD! :D

Kyle Galbraith

Thank you for the comment, David! I am glad you enjoyed it.