
Tomas Fernandez for Semaphore

Originally published at dzone.com

Continuous Integration and Delivery to AWS Kubernetes

Semaphore gives you the power to easily create continuous integration and delivery (CI/CD) workflows to build, test, and deploy applications. In this article, we’ll learn how to combine Semaphore with AWS to run a microservice on Kubernetes in just a few minutes.

What We Are Building

We’re going to set up CI/CD pipelines to automate builds and deployments. By the end of the article, our pipeline will be able to:

  • Install project dependencies.
  • Run unit tests.
  • Build and tag a Docker image.
  • Push the Docker image to Amazon Elastic Container Registry (ECR).
  • Provide a one-click deployment to Amazon Elastic Kubernetes Service (EKS).

As a starting point, we have a Ruby Sinatra microservice that exposes a few HTTP endpoints. We’ll go step by step, doing some modifications along the way to make it work in the Amazon ecosystem.

What You’ll Need

First things first, go to the demo repository, fork it, and clone it to your workstation.

semaphoreci-demos / semaphore-demo-ruby-kubernetes

A Semaphore demo CI/CD pipeline for Kubernetes.

Semaphore CI/CD demo for Kubernetes


This is an example application and CI/CD pipeline showing how to build, test and deploy a microservice to Kubernetes using Semaphore 2.0.

Ingredients:

  • Ruby Sinatra as web framework
  • RSpec for tests
  • Packaged in a Docker container
  • Container pushed to Docker Hub registry
  • Deployed to Kubernetes

CI/CD on Semaphore

If you're new to Semaphore, feel free to fork this repository and use it to create a project.

The CI/CD pipeline is defined in the .semaphore directory and looks like this:

CI/CD pipeline on Semaphore

Local application setup

To run the microservice:

bundle install --path vendor/bundle
bundle exec rackup

To run tests:

bundle exec rspec

To build and run Docker container:

docker build -t semaphore-demo-ruby-kubernetes .
docker run -p 80:4567 semaphore-demo-ruby-kubernetes
curl localhost
> hello world :))

Additional documentation

License

Copyright (c) 2022 Rendered Text

Distributed under the MIT License…

In AWS, we’ll need Elastic Container Registry (ECR), Elastic Kubernetes Service (EKS), and an IAM user with programmatic access. I heartily recommend using eksctl to create the cluster.

If you need help with AWS, check out the full, screenshot-by-screenshot walkthrough in the original unabridged tutorial.
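For reference, a minimal eksctl invocation looks like the sketch below; the cluster name, region, and node count are placeholders to adjust for your account:

```shell
# Create a small EKS cluster. eksctl also writes the matching
# kubeconfig entry to ~/.kube/config, which we'll upload to
# Semaphore later in this tutorial.
eksctl create cluster \
  --name semaphore-demo \
  --region us-east-2 \
  --nodes 2
```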

And you should install some tools on your work machine as well: at minimum the sem CLI, kubectl, and eksctl, all of which we’ll use below.

With that out of the way, we’re ready to get started. Let’s begin!

Semaphore Setup

To automate the whole thing, we’re going to use Semaphore, a powerful, fast, and easy-to-use CI/CD platform.

Setting up Semaphore to work with our code is super easy:

  1. Log in to your Semaphore account.
  2. Follow the link in the sidebar to create a new project.
  3. Semaphore will show your GitHub repositories; click on Add Repository.

Our app already includes some sample pipelines; unfortunately, they weren’t designed for AWS. No matter, we’ll make new ones. For now, let’s just delete the files we won’t need:

$ git rm .semaphore/docker-build.yml
$ git rm .semaphore/deploy.k8s.yml
$ git rm deployment.yml
$ git commit -m "first run on Semaphore"
$ git push origin master

Connect AWS and Semaphore

We need to supply Semaphore with the access keys to our AWS account. Semaphore provides the secrets feature to store sensitive information securely.

We’ll need two secrets. The first one has the access tokens for the AWS IAM user:

$ sem create secret AWS \
    -e AWS_ACCESS_KEY_ID=«YOUR_ACCESS_KEY_ID» \
    -e AWS_SECRET_ACCESS_KEY=«YOUR_SECRET_ACCESS_KEY»

The second secret holds the kubeconfig file, which is needed to connect to the Kubernetes cluster. eksctl should have created this file when the cluster was first provisioned; by default it lives at $HOME/.kube/config on your workstation. Upload that file to Semaphore:
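Before uploading, it’s worth a quick sanity check that the kubeconfig actually points at the new cluster (this requires kubectl and network access to the cluster):

```shell
# Show which context eksctl set as current, then confirm the
# cluster responds by listing its worker nodes.
kubectl config current-context
kubectl get nodes
```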

$ sem create secret aws-k8s \
    -f $HOME/.kube/config:/home/semaphore/.kube/config

Continuous Deployment with Semaphore

If you are curious about the difference between Continuous Integration and Continuous Delivery and how CI/CD pipelines work, I highly recommend reading “CI/CD Pipeline: A Gentle Introduction” for a lovely explanation of the concepts.

Our workflow has a total of three pipelines:

  1. Continuous Integration Pipeline: builds and tests the application. I won’t cover this one here. You can check the original post to learn about its intricacies.
  2. Build Pipeline: generates and uploads the Docker image to AWS ECR.
  3. Deploy Pipeline: starts the Kubernetes deployment.

The Build Pipeline

In this section, we’ll create the Docker build pipeline, which looks like this:

Docker Build Pipeline

Create a new file .semaphore/docker-build.yml with the content of the next three code boxes.

Define a name and an agent; the agent is the machine type that powers the pipeline:

version: v1.0
name: Docker build
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804

Next, the “Build” block, which has your AWS account details (which you’ll have to adjust), the secret that we created earlier, and a single “Docker build” job with build commands:

blocks:
  - name: Build
    task:
      env_vars:
        # Adjust as required
        - name: AWS_DEFAULT_REGION
          value: «YOUR_AWS_REGION (eg. "us-east-2")»
        - name: ECR_REGISTRY
          value: «YOUR_ECR_URI»
      secrets:
        - name: AWS
      jobs:
        - name: Docker build
          commands:
            - checkout
            - sudo pip install awscli
            - aws ecr get-login --no-include-email | bash
            - docker pull "${ECR_REGISTRY}:latest" || true
            - docker build --cache-from "${ECR_REGISTRY}:latest" -t "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}" .
            - docker images
            - docker push "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}"
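A note on the login step: aws ecr get-login exists only in AWS CLI v1. If your build environment ships AWS CLI v2, the command was removed, and the equivalent (a sketch; adjust the region and registry variables for your account) is:

```shell
# AWS CLI v2 replacement for `aws ecr get-login`: fetch a temporary
# token and feed it to docker login on stdin.
aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
  | docker login --username AWS --password-stdin "$ECR_REGISTRY"
```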

To connect this pipeline with the next one, add a promotion:

promotions:
  - name: Deploy to Kubernetes
    pipeline_file: deploy-k8s.yml

The Deployment Manifest

Automatic deployment is Kubernetes' strong suit. All you need is a manifest with the resources you want online. Create a new file called deployment.yml with the content of the next two code boxes.

A LoadBalancer service to forward HTTP traffic:

apiVersion: v1
kind: Service
metadata:
  name: semaphore-demo-ruby-kubernetes-lb
spec:
  selector:
    app: semaphore-demo-ruby-kubernetes
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 4567

And the microservice pod, which contains the reference to the Docker image. Append the following code to the manifest:


---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: semaphore-demo-ruby-kubernetes
spec:
  replicas: 1
  selector:
    matchLabels:
      app: semaphore-demo-ruby-kubernetes
  template:
    metadata:
      labels:
        app: semaphore-demo-ruby-kubernetes
    spec:
      containers:
        - name: semaphore-demo-ruby-kubernetes
          image: "${ECR_REGISTRY}:${SEMAPHORE_WORKFLOW_ID}"
      imagePullSecrets:
        - name: aws-ecr

Replace replicas: 1 with the number of nodes you have in your cluster.

The Deployment Pipeline

Time to automate the deployment. In this section, we’ll create the “Deploy to Kubernetes” pipeline:

Deployment Pipeline

Create a deployment pipeline at .semaphore/deploy-k8s.yml with the next two code boxes:

version: v1.0
name: Deploy to Kubernetes
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Deploy to Kubernetes
    task:
      secrets:
        - name: aws-k8s
        - name: AWS
      env_vars:
        # Adjust environment values for your account
        - name: AWS_DEFAULT_REGION
          value: «YOUR_AWS_REGION»
        - name: ECR_REGISTRY
          value: «YOUR_ECR_URI»
      jobs:
        - name: Deploy
          commands:
            - checkout
            - mkdir -p ~/bin
            - curl -o ~/bin/aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.12.7/2019-03-27/bin/linux/amd64/aws-iam-authenticator
            - chmod a+x ~/bin/aws-iam-authenticator
            - export PATH=~/bin:$PATH
            - sudo pip install awscli
            - export ECR_PASSWORD=$(aws ecr get-login --no-include-email | awk '{print $6}')
            - kubectl delete secret aws-ecr || true
            - kubectl create secret docker-registry aws-ecr --docker-server="https://$ECR_REGISTRY" --docker-username=AWS --docker-password="$ECR_PASSWORD"
            - kubectl get secret aws-ecr
            - cat deployment.yml | envsubst | tee deploy.yml
            - kubectl apply -f deploy.yml

The job can be broken down into three parts:

  1. Install AWS tools.
  2. Create a secret in the Kubernetes cluster with the ECR token—so it may pull the image from the private repo.
  3. Prepare the deployment manifest and send it to the cluster.
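Step 2 works because aws ecr get-login prints a ready-made docker login command, so the password is the sixth whitespace-separated field — exactly what the pipeline's awk expression extracts. A quick local illustration with a fake command line:

```shell
# Simulate the output of `aws ecr get-login --no-include-email`
# and extract the password field the same way the pipeline does.
login_cmd='docker login -u AWS -p FAKE_TOKEN https://123456789012.dkr.ecr.us-east-2.amazonaws.com'
echo "$login_cmd" | awk '{print $6}'
# prints: FAKE_TOKEN
```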

The second block tags the current image as “latest” to indicate that it is the image actually running on the cluster. The job pulls the image it just deployed, applies the “latest” tag, and pushes it back to the registry.

  - name: Tag latest release
    task:
      secrets:
        - name: AWS
      env_vars:
        # Adjust environment values for your account
        - name: AWS_DEFAULT_REGION
          value: «YOUR_AWS_REGION»
        - name: ECR_REGISTRY
          value: «YOUR_ECR_URI»
      jobs:
        - name: Docker tag latest
          commands:
            - sudo pip install awscli
            - aws ecr get-login --no-include-email | bash
            - docker pull "${ECR_REGISTRY}:$SEMAPHORE_WORKFLOW_ID"
            - docker tag "${ECR_REGISTRY}:$SEMAPHORE_WORKFLOW_ID" "${ECR_REGISTRY}:latest"
            - docker push "${ECR_REGISTRY}:latest"

With the pipelines defined, we’re ready to roll.

Deploy to Kubernetes

Let’s teach our Sinatra app to sing. Add the following code inside the App class in app.rb:

get "/sing" do
  "And now, the end is near
   And so I face the final curtain..."
end
Push the changes to kick off the workflow:

$ git add .semaphore/*
$ git add deployment.yml
$ git add app.rb
$ git commit -m "add deployment pipeline"
$ git push origin master

Wait until the first two pipelines complete and hit the Promote button:

Full CI/CD Workflow

Allow a few seconds for the deployment to take place. You can monitor the progress with:

$ kubectl get pods
$ kubectl get deployments
$ kubectl get service

Once complete, check the external IP address that was assigned to your LoadBalancer service and try the HTTP endpoint.
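One way to fetch that address without scanning the table output is a jsonpath query; on EKS, the LoadBalancer typically exposes a DNS hostname rather than a bare IP:

```shell
# Print the external DNS name assigned to the LoadBalancer service.
kubectl get service semaphore-demo-ruby-kubernetes-lb \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```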

$ curl -w "\n" <YOUR_CLUSTER_EXTERNAL_URL>/sing
And now, the end is near
And so I face the final curtain...

One more time, Ol' Blue Eyes sings for us.

The Final Curtain

Congratulations! You now have a fully automated continuous delivery pipeline to Kubernetes.

Feel free to fork the semaphore-demo-ruby-kubernetes repository and create a Semaphore project to deploy it on your Kubernetes instance. Here are some potential changes you can try:

  • Create a staging cluster.
  • Build a development container and run tests inside it.
  • Extend the project with more microservices.

Did you find the post useful? Let me know by ❤️-ing or leaving a comment below.

Interested in CI/CD and Kubernetes? We’re working on a free ebook; sign up to receive it as soon as it’s published.

Thanks for reading!
