
James Eastham

CI/CD in GoLang with Azure DevOps and AWS Elastic Beanstalk

This post follows a series of articles I've been writing about designing a distributed system. I'd highly recommend at least reading part 1 before this one, just to get a little bit of context.

Now that the team-service application is as feature-rich as it needs to be for the moment, it feels like a great time to get it running somewhere more useful to the world. http://localhost:8000 is fantastic for me, but not so great for anybody else who wants to use the app.

With the tooling we have available as developers today, I see no reason at all that software should ever be manually deployed.

For that reason, the team-service application is going to be deployed using CI/CD pipelines that require zero user interaction. All releases should be triggered by a developer-friendly interaction (a git commit, a pull request, etc.).

There are a couple of different tools I'm going to be using to achieve this:

  1. Azure DevOps Services - Azure DevOps is by far my favourite build and release pipeline provider. It also offers built-in git repos for those private projects you want to keep away from prying eyes.
  2. AWS Elastic Beanstalk - AWS Elastic Beanstalk is a fantastic offering for quickly getting source code running in a public domain.

Some of you may be questioning why I'm using a Microsoft-provided CI/CD service with Amazon-provided infrastructure.

My biggest reason is the simplicity of build pipelines in Azure DevOps. I've experimented with AWS CodeBuild and don't find the interface anywhere near as intuitive.

Deploying to AWS from Azure DevOps is extremely easy after installing an extension from the marketplace.

There is no clear-cut reason why I prefer AWS over Azure for app hosting. They are both fantastic cloud providers.

I work with AWS a lot more in my day-to-day work, and I find their costs for playing around to be a lot more reasonable. Hence pushing to AWS.

Build Pipelines in Azure DevOps

Build pipelines in Azure DevOps are defined in YAML. You can either manually type out the YAML files, or use the editor within the UI (which is fantastic, by the way).

To get started though, I need to link a new project to my GitHub repo.

When creating the new project, it's important to create it as public. You can only link to a public GitHub repo from a public Azure DevOps project.


The DevOps UI makes it extremely easy to link up a GitHub project. When creating a new pipeline, just step through the wizard to create a link to the GitHub repo.

For the moment, I'm going to create a new pipeline and save the standard layout. It'll look something like this.

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- script: echo Hello, world!
  displayName: 'Run a one-line script'

- script: |
    echo Add other tasks to build, test, and deploy your project.
    echo See https://aka.ms/yaml
  displayName: 'Run a multi-line script'


This will commit a new file named azure-pipelines.yml to my git repo. Before I go any further, I'm going to do a little bit of housekeeping and move that file into the src folder for the team-service (updating the YAML file path in the pipeline settings to match).

Manually starting a build gives the following output.

[Screenshot: the build output]

Fantastic, we have a working build pipeline.

Required deployment steps

Before adding any steps to the pipeline, it's important to think about exactly what the steps would be to deploy the code. In this instance:

  1. Run unit tests
  2. Package the application in a runnable format (a ZIP file containing the Dockerfile)
  3. Deploy to the Elastic Beanstalk environment

Simple, right?

The Build Pipeline

As much as I love writing code, the build pipeline UI really is too good not to use.


You can search for the required component on the right-hand side using the assistant, fill in the variables and add it to the yaml. Once finished, saving the pipeline commits it directly back to the Git repo. Magical!

An important note at this point: I had already installed the AWS Toolkit for Azure DevOps extension. It's a fantastic set of tools that makes deploying to AWS a piece of cake.

So, back to that pesky YAML file. Here it is in its entirety; you'll find some explanation after the code block.

# 1. Triggers
trigger:
  branches:
    include:
    - master
    - release/*
  paths:
    include:
    - src/team-service/*

# 2. Jobs
jobs:
- job: run_unit_tests
  displayName: 'Run Unit Test'
  pool:
    vmImage: 'ubuntu-latest'
# 3. Steps
  steps:
  # Run domain level tests
  - task: Go@0
    inputs:
      command: 'test'
      workingDirectory: 'src/team-service/domain'
  # Run use case level tests
  - task: Go@0
    inputs:
      command: 'test'
      workingDirectory: 'src/team-service/usecases'
  # Ensure application can build
  - task: Go@0
    inputs:
      command: 'build'
      workingDirectory: 'src/team-service/'


- job: package_application_files
  displayName: 'Package application files'
# 4. Dependent jobs
  dependsOn: [ run_unit_tests ]
  condition: and(succeeded(), startsWith(variables['Build.SourceBranch'], 'refs/heads/release/'))
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  # Create zip file of team-service
  - task: ArchiveFiles@2
    inputs:
      rootFolderOrFile: 'src/team-service'
      includeRootFolder: false
      archiveType: 'zip'
      archiveFile: '$(Build.ArtifactStagingDirectory)/app.zip'
      replaceExistingArchive: true
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(Build.ArtifactStagingDirectory)'
      ArtifactName: 'drop'
      publishLocation: 'Container'


I've added four numbered points within the YAML for ease of explanation.

1. Triggers

Triggers control when the pipeline should run. In this case, I'm going to run it on any change to master, or to any change on a branch whose name begins with release/.

The pipeline will also only run if the changes touch files under the src/team-service folder. This stops unnecessary builds when irrelevant changes are made (documentation, other services, etc.).

2. Jobs

Jobs are a way of organizing a pipeline into logical chunks. Each pipeline has at least one job, and each job can have any number of steps.

In this case, there are two jobs: one to run the unit tests, and one to package and publish the application files.

3. Steps

Steps make up the meat of the pipeline functionality. They are the individual tasks that run, one by one, against your source code. In this case, we:

  • Run the domain-level unit tests
  • Run the use-case-level unit tests
  • Ensure the entire Go application builds
  • ZIP the application files, including the Dockerfile
  • Publish the build artifact

No mention of AWS just yet; more on that in a second.

4. Dependent jobs

One of the most powerful features of pipelines is the ability to run jobs conditionally.

Here, I'm only running the second job if the source branch name begins with release/ and the run_unit_tests job completes successfully.

And there we have it: 53 lines of code to test, build and package a Go application. Now for the deployment part:

Deploy from Azure DevOps to AWS

Azure DevOps splits its pipeline functionality into two distinct parts:

  • Builds: the generation of 'artifacts' containing packaged versions of the source code
  • Releases: take build artifacts and deploy them

In this section, I'll run through the extremely simple setup of a release pipeline to push to AWS. First though, some quick AWS admin.

Create an Elastic Beanstalk Application

For the release to work, we first need an Elastic Beanstalk application and environment to deploy to.

For that, log in to the AWS Management Console and head over to the Elastic Beanstalk section.

From there, create a new application with a descriptive name.

Once within the created application, you will want to provision a new environment. I tend to run with two separate environments for most applications: dev and prod.

For this application, I'm going to create a new dev environment using Docker as the runtime. For the time being, I also want to create the environment using the sample application code.


After following the same steps to create a second environment named production, a few minutes later I end up with the following in the Elastic Beanstalk application interface.

[Screenshot: the application with its two Elastic Beanstalk environments]

One final AWS console step: create a publicly accessible Amazon S3 bucket. For that, head off to the S3 interface and create a new bucket, ensuring the block public access options are disabled.

Azure DevOps Release Pipeline

There are a million and one different ways to manage release pipelines. Push direct to production. Push all builds to development but only certain builds to production. The list is endless.

I'm going to start with the following flow:

  1. The developer creates a new branch from master with a name beginning release/
  2. The build pipeline is triggered from the release branch
  3. The release pipeline pushes to development, then waits a specified number of minutes before pushing to production.

This gives the opportunity for a quick test of the development environment before the change gets automatically sent to production.
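
As an aside, if you'd rather keep the whole flow in YAML instead of the classic release UI, roughly the same dev → wait → prod shape can be sketched as a multi-stage pipeline. This is only a minimal sketch, not the pipeline I actually built: the stage and job names are made up, the deploy steps are stubbed out, and the built-in Delay task has to run in an agentless (server) job.

# Minimal multi-stage sketch of the dev -> wait -> prod flow.
# Stage and job names are hypothetical; real deploy steps are stubbed out.
stages:
- stage: deploy_development
  jobs:
  - job: push_to_development
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "deploy to the development environment here"

- stage: deploy_production
  dependsOn: deploy_development
  jobs:
  # The Delay task only runs in an agentless (server) job
  - job: wait_before_production
    pool: server
    steps:
    - task: Delay@1
      inputs:
        delayForMinutes: '15'
  - job: push_to_production
    dependsOn: wait_before_production
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: echo "deploy to the production environment here"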

I also considered adding an approval gate so a specific user could approve the release, but given I'm the only developer right now, that seemed unnecessary.

I'm going to head into the Releases section of the Azure DevOps UI and create a new release pipeline. Initially, I want to start from an empty pipeline.

All release pipelines need to be linked to a build artifact; for that, I'm going to add a new artifact and link it to the build pipeline created earlier.

I then need three distinct steps:

  • Push to development
  • Wait for X minutes
  • Push to production

Luckily, all that is nice and simple.

The two push tasks are almost identical, so here is the detail from the development one:

[Screenshot: the development push tasks]

First, I take the ZIP file generated from the build pipeline and upload that to the S3 bucket created at the start of this section.

For the moment, I standardize the ZIP file to always be named app.zip, but in the future I would like to upload a ZIP file named with the release number.

From there, I create a new Elastic Beanstalk deployment using the zip file uploaded to S3.
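
I built those two tasks in the release pipeline UI, but for reference, expressed as YAML they would look roughly like the sketch below. It assumes the S3Upload and BeanstalkDeployApplication tasks from the AWS Toolkit extension; the service connection name, region, bucket and application names are placeholder values, not my real configuration.

# Rough YAML sketch of the development push, under the assumptions above.
- task: S3Upload@1
  inputs:
    awsCredentials: 'my-aws-connection'        # hypothetical service connection name
    regionName: 'eu-west-1'                    # placeholder region
    bucketName: 'team-service-deployments'     # placeholder name for the bucket created earlier
    sourceFolder: '$(System.DefaultWorkingDirectory)'
    globExpressions: '**/app.zip'

- task: BeanstalkDeployApplication@1
  inputs:
    awsCredentials: 'my-aws-connection'
    regionName: 'eu-west-1'
    applicationName: 'team-service'            # placeholder Beanstalk application name
    environmentName: 'team-service-dev'        # placeholder environment name
    applicationType: 's3'                      # deploy a bundle that is already in S3
    deploymentBundleBucket: 'team-service-deployments'
    deploymentBundleKey: 'app.zip'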

Because I set the platform to Docker, Elastic Beanstalk is smart enough to know that it needs to unpack the ZIP file and look for a Dockerfile. Using that Dockerfile, EB can build an image and run it right away.

Cloud computing... it's a magical place.

[Screenshot: the complete release pipeline]

There is my complete release pipeline: a push to development, a nice simple wait task and then a push to production.

Summary

There we have it: a complete end-to-end deployment pipeline, from development machine through to a live application running in AWS.

As a test, I've run the following commands from my local machine.

git checkout -b release/1.0.0
git push --set-upstream origin release/1.0.0

Sure enough, my build pipeline runs to completion before starting my release pipeline.

A few minutes later, the application is running in production.


I've said it once, and I'll say it again, cloud computing is truly a magical place.

This is by no means a production-ready guide just yet, the main reason being that both development and production are running on the same underlying infrastructure (DynamoDB tables and SQS queues).

Application configuration, now there's a conversation for another day.

As always, feedback is greatly appreciated.
