Dave

Building a CI/CD Pipeline for Shoelace

AWS Elastic Beanstalk and a CD Pipeline

Over the past few months at Shoelace, we’ve slowly moved our cloud-hosted infrastructure from DigitalOcean to Amazon Web Services. While DigitalOcean served us well for a long time, we wanted a more feature-rich service that would simplify automated deployments and load balancing. Although there are plenty of cloud-hosting options from big-name companies, Amazon was the easy choice for us given our existing use of services such as S3, and my past experience with the platform.

Problem space

We had a decent-sized fleet of servers (droplets) hosted on DigitalOcean, each with our latest production code checked out and running in a PM2-managed instance of Node.js. That meant that whenever we wanted to deploy a new version to production, we had to manually SSH into six (!) different servers, pull the latest from Git, rebuild via Grunt, and reload Node via PM2. While not a difficult or unmanageable amount of work, it became tedious and wasteful once we wanted to go to production multiple times per day. We didn't want to be held back by an archaic deployment process, so we went looking for something better.

Goal

Primarily, we wanted to reduce the amount of work required to deploy, eliminating any obstacle in the way of shipping easily, quickly, and safely. Achieving that would let us iterate faster, giving us greater agility in responding to market trends, API changes, and bugs.

I had some prior experience with AWS, which influenced our decision as a company to move onto the platform (and the free credits afforded to us as a startup didn't hurt). Knowing the flexibility that AWS Elastic Beanstalk grants out of the box, we also wanted to add load balancing at minimal cost, something DigitalOcean didn't support at the time. Auto-scaling policies would also let us scale our instances up and down according to traffic. With so many benefits available out of the box, we decided to move forward.

Approach

I first started by Dockerizing all of our projects. Running Docker in production gave us peace of mind that builds would be identical across every environment. Since our stack is built on Node.js, I used the official Node base image from Docker Hub. Our Docker images are pretty simple, as their main job is to take a snapshot of the code base at the time the image is created. We use Docker RUN instructions to execute npm install and grunt build, and we use PM2's Docker entrypoint as our default command.
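For illustration, here's a minimal sketch of what such a Dockerfile might look like; the Node version, file layout, Grunt task, and entry script below are assumptions, not our exact setup:

```dockerfile
# Minimal sketch of the Dockerfile described above.
# node:8, the grunt task, and server.js are illustrative placeholders.
FROM node:8

WORKDIR /app

# Copy the manifest first so dependency installs can be layer-cached
COPY package.json package-lock.json ./
RUN npm install

# Snapshot the code base and run the Grunt build
COPY . .
RUN ./node_modules/.bin/grunt build

# pm2-docker keeps Node in the foreground and restarts it on crashes
RUN npm install -g pm2
CMD ["pm2-docker", "server.js"]
```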

From there, I manually set up our Elastic Beanstalk environments. I enabled load balancing and autoscaling with the default triggers, plus a NAT gateway to funnel outbound traffic through a single IP address, which simplifies our interactions with the Facebook API, which validates requests against a whitelist. Instead of updating the whitelist every time a new EC2 instance spins up under our load balancer, the NAT gateway ensures a single IP address sends all requests to Facebook. I also chose immutable deployments across our EC2 instances, eliminating the chance of users interacting with a partially rolled-out deployment.
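Most of that setup happened in the console, but the deployment policy can also be pinned in configuration. A hedged sketch of an .ebextensions file that would enable immutable deployments (the filename is arbitrary):

```yaml
# .ebextensions/deploy.config (hypothetical filename)
# Pins the environment's deployment policy to Immutable, so each
# deploy happens on fresh instances rather than in place.
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable
```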

Next up, I wrote a custom deploy script to tie the continuous deployment flow together. The script has a few main components.

First, it builds the Docker image from the current directory and pushes that image out to the AWS Elastic Container Registry (ECR). I could have used Docker Hub here, but given that the rest of our cloud infrastructure was already on Amazon, keeping everything under the Amazon umbrella seemed simpler.
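In shell terms, that step boils down to something like the following; the account ID, region, and repository name are placeholders, and this uses the classic aws ecr get-login flow that was current at the time:

```bash
#!/usr/bin/env bash
# Sketch of the build-and-push step. The account ID, region, and
# repository name below are placeholders, not our real values.
set -euo pipefail

ACCOUNT_ID=123456789012
REGION=us-east-1
REPO=shoelace-app   # hypothetical repository name
TAG=$(git rev-parse --short HEAD)
IMAGE="$ACCOUNT_ID.dkr.ecr.$REGION.amazonaws.com/$REPO:$TAG"

# Authenticate the local Docker client against ECR
$(aws ecr get-login --no-include-email --region "$REGION")

# Build against the current directory and push to ECR
docker build -t "$IMAGE" .
docker push "$IMAGE"
```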

Next, the script creates a ZIP file containing a Dockerrun.aws.json file, which Beanstalk uses to configure applications in an Elastic Beanstalk Docker environment, along with some low-level Beanstalk configuration files that ensure our network configuration survives environment updates. The Dockerrun file is how Amazon later knows which Docker image to pull when updating our environment. This ZIP file is pushed to S3.
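For reference, a single-container Dockerrun.aws.json is quite small; the image name and port below are placeholders:

```json
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "123456789012.dkr.ecr.us-east-1.amazonaws.com/shoelace-app:abc1234",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "3000"
    }
  ]
}
```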

Afterwards, a new Beanstalk application version is created from the ZIP file we just pushed to S3, and we trigger an environment update using that newly created version.
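Those last two steps map cleanly onto the AWS CLI. A sketch, continuing from the build step above, with placeholder bucket, application, and environment names:

```bash
# Sketch of bundling, uploading, and deploying a new version.
# Bucket, application, and environment names are placeholders;
# $TAG carries over from the build-and-push step above.
VERSION="$TAG"
BUNDLE="deploy-$VERSION.zip"

zip -r "$BUNDLE" Dockerrun.aws.json .ebextensions
aws s3 cp "$BUNDLE" "s3://shoelace-deploys/$BUNDLE"

# Register the bundle as a new application version...
aws elasticbeanstalk create-application-version \
  --application-name shoelace-app \
  --version-label "$VERSION" \
  --source-bundle S3Bucket=shoelace-deploys,S3Key="$BUNDLE"

# ...then point the environment at it to trigger the update
aws elasticbeanstalk update-environment \
  --environment-name shoelace-production \
  --version-label "$VERSION"
```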

To tie this all together, and to cover the continuous integration and testing piece, we integrated with CircleCI. Once our projects were added, I updated our CircleCI configuration to execute the deploy script whenever our production branch on GitHub built successfully (i.e., our linter and automated tests passed).
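A rough sketch of what such a CircleCI 2.0 configuration could look like; the image, test commands, and script path are assumptions, not our exact config:

```yaml
# .circleci/config.yml (illustrative; job details are assumptions)
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8
    steps:
      - checkout
      # Needed so the job can run `docker build` inside the deploy script
      - setup_remote_docker
      - run: npm install
      - run: npm test
      - deploy:
          name: Deploy to Elastic Beanstalk
          command: |
            if [ "$CIRCLE_BRANCH" = "production" ]; then
              ./scripts/deploy.sh
            fi
```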

And that's it! Now, whenever we cut a production branch on GitHub, our code is automatically tested and deployed to AWS. No more manual deployments.
