Dan Dobrescu for Studyportals

CI/CD as a Service

Abstract

Continuous Deployment has transformed how developers introduce changes, placing emphasis on automation, reliability, and speed. As applications grow, the pipelines that transition code from development to production must also evolve. The objective is to offer quicker feedback through shortened deployment cycles. These pipelines should be adaptable, reflecting the needs of the software they handle. It's vital to incorporate rigorous quality gates at key intervals, making certain that agility doesn't sacrifice quality.

This article dives deep into the synergy of AWS and its Cloud Development Kit (CDK), illustrating how these tools can help DevOps engineers craft CI/CD pipelines that are not only efficient but also agile and responsive to the ever-evolving demands of the software.

About Us

At Studyportals, we have used serverless automation to create environments and CI/CD pipelines since the early days of AWS Lambda. As we transitioned towards a microservice architecture, deployment complexity also increased and, without constant maintenance, the deployment tools we were using proved increasingly difficult to extend and maintain. Our engineering teams relied mostly on pipelines that were manually created for each service. This approach, while initially serving our needs, started to show its limits. One notable struggle was devising a robust, uniform notification system that centralized all deployment notifications in one place, in a consistent format. Pushing updates to the pipelines, and to the resources backing them, across multiple accounts and regions posed another hurdle.

Our architecture is distinctly categorized into three primary types of services:

  • Docker (Monoliths): These are our traditional monolithic applications that encapsulate various functionalities within a single service boundary, containerized using Docker for consistent deployment and scaling.

  • Frontend Microservices: These microservices focus on the user interface and experience. Recently, we've standardized them by transitioning to Vue 3, ensuring a cohesive development approach. Each of these web applications is proxied via CloudFront, enhancing content delivery speed and security.

  • Backend Microservices: These handle business logic, data processing, and backend operations. Developed with a serverless architecture, they offer a modular and scalable approach.

Charting new territories

Our objective was to devise a CI/CD service that could proficiently manage each of our three distinct service types. The goal extended beyond merely ensuring smooth continuous integration and deployment; we also aimed to oversee the comprehensive creation of all requisite infrastructure for deploying the services. Additionally, we prioritized deploying production and non-production environments on distinct AWS accounts for the purpose of security and compliance audits, ensuring they remained isolated from one another. This approach not only bolstered security, but also offered enhanced separation and the ability to fully manage the service’s containment.

Building robust infrastructure can be complex, and dynamically generating CloudFormation templates in JSON/YAML can be a real headache, even when using frameworks like Serverless or SAM. With the AWS Cloud Development Kit (CDK), we tap into the flexibility of TypeScript, which is a game-changer. The basic building blocks of AWS CDK are called constructs: each construct maps to one or more AWS resources and can be composed of other constructs. This resembles the Composite design pattern. Just as individual Lego pieces can be snapped together to form larger sections, smaller CDK constructs can be grouped together to create larger, more complex constructs, and these groupings can be connected to shape an even larger structure.
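To make the Lego analogy concrete, here is a toy model of that composition in plain TypeScript. This is deliberately not the real `aws-cdk-lib` API, just the tree-of-constructs idea behind it; the class and resource names are illustrative:

```typescript
// Toy model of CDK-style composition: each construct has an id, a parent
// scope, and children, forming a tree that mirrors how CDK assembles a
// stack from nested constructs.
class Construct {
  readonly children: Construct[] = [];
  constructor(readonly scope: Construct | undefined, readonly id: string) {
    scope?.children.push(this);
  }
  // Walk the tree and collect every node's path, the way CDK derives
  // logical IDs from a construct's position in the tree.
  paths(prefix = ""): string[] {
    const path = prefix ? `${prefix}/${this.id}` : this.id;
    return [path, ...this.children.flatMap((c) => c.paths(path))];
  }
}

// A "larger" construct built out of smaller ones, Lego-style.
class StaticSite extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new Construct(this, "Bucket");       // stands in for an S3 bucket
    new Construct(this, "Distribution"); // stands in for CloudFront
  }
}

const stack = new Construct(undefined, "ServiceStack");
new StaticSite(stack, "Site");
console.log(stack.paths());
// ["ServiceStack", "ServiceStack/Site",
//  "ServiceStack/Site/Bucket", "ServiceStack/Site/Distribution"]
```

In real CDK code, `StaticSite` would extend `Construct` from the `constructs` package and instantiate `s3.Bucket` and `cloudfront.Distribution`, but the composition mechanism is the same.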

Why AWS CDK stands out for us:

  • Reusable Components: Crafting infrastructure chunks that can be used again and again, making our setup tasks way more efficient.

  • Team Collaboration: Sharing these setups is a breeze, ensuring our teams can build consistently without reinventing the wheel.

  • High-Level Constructs: We can set up anything from security groups to Elastic Beanstalk environments. They’re designed to complete common tasks in AWS, often involving multiple types of resources.

  • Agility and Adaptability: The flexibility AWS CDK offers ensures we adapt swiftly to changes, keeping our infrastructure and applications in harmony.

First things first, we had to determine how we were going to trigger the creation of the infrastructure. Two solid paths lay ahead: GitHub Webhooks and GitHub Workflows.

Webhooks are straightforward - they ping our system with a notification when specific GitHub events happen. It's efficient but requires us to have a centralized system ready and waiting to handle those pings and translate them into action.

On the other hand, GitHub Workflows sit right in the GitHub space, allowing us to script responses to those same events without leaving the environment. This means that when a change is detected, GitHub Actions can prepare and execute the CloudFormation templates we’ve brewed up with AWS CDK, all in one smooth motion.
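A workflow along these lines might look like the sketch below; the workflow name, branch list, and credential handling are illustrative assumptions, not our actual configuration:

```yaml
# Hypothetical GitHub Actions workflow: deploy CDK-defined infrastructure
# on the same events the article describes (pushes and pull requests).
name: deploy
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 18
      - run: npm ci
      # Synthesize and deploy the CDK-generated CloudFormation templates
      - run: npx cdk deploy --all --require-approval never
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
```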

Why GitHub Actions caught our eye:

  • Integrated: It's in GitHub, keeping things tidy and contained.

  • Control: It lets us dictate the what, when, and how of action triggers.

  • Efficient: Less platform-hopping for the team equals more streamlined workflows.

Setting up infrastructure and CI/CD pipelines for various services and environments often involves dealing with overlapping components. The challenge here is finding a way to keep things DRY (Don't Repeat Yourself) and efficient, without tying our stacks up in knots with dependencies they don’t need. Each stack needs to be its own island - able to function alone, but also be part of a bigger archipelago of services, sharing resources smartly where it makes sense, and avoiding unnecessary duplication.

This means we need a setup that’s lean, avoiding unnecessary duplication, and mindful of AWS quotas, while also ensuring our stacks can go solo on any AWS account without tripping over each other.

The obvious choice was to split things into two main stacks for each service: an environment stack holding the communal resources (e.g. CodeBuild instances, S3 buckets, CloudFront policies) and a service stack with the particular resources that an individual service needs to operate. This way, everything common is in one place and everything specific has its spot. Interestingly, this is the same model AWS Proton, a fully managed application deployment service, follows with its environment and service stacks. It was nice to see Proton arrive with a similar approach after we had adopted this strategy; while we set our course months before Proton was released, an integration with it is something we’re keeping an eye on for the road ahead.
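As a rough sketch of the split, in plain TypeScript rather than real CDK code (the stack and resource names are invented for illustration):

```typescript
// Toy model of the two-stack split: communal resources live once per
// environment, while each service stack holds only what one service needs.
type Resource = { stack: string; id: string };

function environmentStack(envName: string): Resource[] {
  // Communal resources shared by every service in this environment.
  return ["ArtifactBucket", "CodeBuildProject", "CloudFrontPolicy"].map(
    (id) => ({ stack: `${envName}-environment`, id })
  );
}

function serviceStack(envName: string, service: string): Resource[] {
  // Only the resources this particular service needs to operate.
  return ["Function", "Distribution"].map((id) => ({
    stack: `${envName}-${service}`,
    id,
  }));
}

// Three services share a single environment stack instead of each
// triplicating the communal resources:
const resources = [
  ...environmentStack("testing"),
  ...serviceStack("testing", "search"),
  ...serviceStack("testing", "profile"),
  ...serviceStack("testing", "reviews"),
];
console.log(resources.length); // 3 communal + 3×2 service-specific = 9
```

Adding a fourth service adds one more small service stack; the communal resources stay untouched, which is exactly the DRY property we were after.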

Here is how it looks in broad strokes:

*Figure: Big Lines Architecture*

As we dug deeper into the CI/CD architecture, it became clear that we needed to pay close attention to our branching strategies. This wasn’t just a side note - it was crucial to shaping our entire CI/CD setup. We needed to be nimble and support two main branching strategies:

  • Git Flow variation, used mostly by projects that require a permanent testing environment

  • Trunk Based Development, used by small projects and microservices with a lower complexity level

Based on the branching strategy, we identified the points where the CI/CD pipeline should run and spin up environments:

  • PR environment, when creating a PR from the feature branch to the default branch

  • Testing environment, when pushing to the default branch

  • RC environment, when creating a PR from the develop branch to the release branch

  • Production environment, when pushing to the main branch.

* For trunk-based development strategies, the Testing environment doesn’t exist and the PR environment coincides with the RC environment.
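The trigger rules above can be sketched as a single mapping from branching strategy and GitHub event to target environment. The branch names (`main`, `develop`, `release`) follow the conventions listed above; the function itself is an illustrative sketch, not our production code:

```typescript
// Map a branching strategy plus a GitHub event to the environment that
// the CI/CD service should create or update. Returns undefined when the
// event should not trigger anything.
type Strategy = "git-flow" | "trunk-based";
type Event =
  | { type: "pull_request"; from: string; to: string }
  | { type: "push"; branch: string };

function environmentFor(strategy: Strategy, event: Event): string | undefined {
  const defaultBranch = strategy === "git-flow" ? "develop" : "main";
  if (event.type === "pull_request") {
    if (event.to === defaultBranch) {
      // Under trunk-based development the PR environment doubles as RC.
      return strategy === "trunk-based" ? "pr/rc" : "pr";
    }
    // Git Flow: PR from develop into the release branch → RC environment.
    if (strategy === "git-flow" && event.from === "develop" && event.to === "release") {
      return "rc";
    }
    return undefined;
  }
  // Pushes: main → production; the default branch → testing (Git Flow only).
  if (event.branch === "main") return "production";
  if (strategy === "git-flow" && event.branch === "develop") return "testing";
  return undefined;
}
```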

Putting it all together

Piecing all of this together was not a walk in the park, to say the least. While on paper everything might seem linear, our real-world implementation brought us face to face with challenges we hadn't quite anticipated. The most notable among these was the intricate relationship between the environment stack and the service stack.

Using AWS CDK proved powerful, but it had its challenges. One issue we bumped into was CDK occasionally attempting to recreate a resource in the parent (environment) stack while it was still in use by the child (service) stack. This isn't a new issue, as highlighted by Dependency stacks cause update failure. There are strategies to get past these cross-stack references, but they usually require manual steps, and we aimed for full automation. Our workaround? We leaned on sharing data between stacks through the SSM Parameter Store. This was particularly helpful in situations where we foresaw potential issues, enabling us to update a resource in the parent stack without disturbing its relationship with the child stack.
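The idea behind the workaround: the environment stack publishes a value (in real CDK code, via `ssm.StringParameter`) under a stable parameter name, and the service stack resolves it by name at deploy time instead of taking a hard CloudFormation export/import reference. A toy model of that contract in plain TypeScript, with hypothetical parameter names:

```typescript
// Toy stand-in for SSM Parameter Store: the parameter *name* is the only
// contract between the stacks, so the value behind it can change freely.
const parameterStore = new Map<string, string>();

// Environment (parent) stack: publish the resource identifier.
function deployEnvironment(bucketArn: string): void {
  parameterStore.set("/studyportals/testing/artifact-bucket-arn", bucketArn);
}

// Service (child) stack: resolve by name at deploy time instead of a
// hard CloudFormation cross-stack reference (Export / Fn::ImportValue).
function deployService(): string {
  const arn = parameterStore.get("/studyportals/testing/artifact-bucket-arn");
  if (!arn) throw new Error("environment not deployed yet");
  return arn;
}

deployEnvironment("arn:aws:s3:::artifacts-v1");
deployService(); // uses artifacts-v1
deployEnvironment("arn:aws:s3:::artifacts-v2"); // bucket replaced in parent
deployService(); // picks up artifacts-v2, no stack coupling to break
```

Because CloudFormation never sees a dependency between the two stacks, the parent can replace the bucket without the "export in use" failure described above.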

Another struggle was with account-level limits, such as the number of CloudFront custom policies or the number of IAM roles, which meant we had to introduce account-level support stacks in addition to our environment and service stacks. These account-level stacks housed resources that could be used by multiple services in the same account, and were designed to be long-lasting.

With the groundwork laid out, let's pull the curtain on our service blueprint:

*Figure: In Depth Architecture*

For every service, we established a service descriptor file (cicd.json) that is versioned in GitHub along with the rest of the project’s source code. This lets us tailor the stacks we deploy to each specific service’s requirements.
The environment branches match the branches on which the GitHub workflow listens for events; based on the event type (pull_request or push), we choose whether to create a PR/RC or a Testing/Production environment.
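For illustration, such a descriptor could look something like this; the field names and values are hypothetical, since the exact schema is internal to us:

```json
{
  "serviceName": "search-service",
  "serviceType": "backend-microservice",
  "branchingStrategy": "trunk-based",
  "environments": {
    "pr": {
      "account": "111111111111",
      "region": "eu-west-1",
      "branch": "main",
      "event": "pull_request"
    },
    "production": {
      "account": "222222222222",
      "region": "eu-west-1",
      "branch": "main",
      "event": "push"
    }
  }
}
```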

*Figure: Code Workflow and Descriptor*

The environments in the descriptor extend an out-of-the-box CDK feature, also called Environments. It gave us a ton of flexibility: we could deploy our setup to any AWS account or region we wanted, making our deployment as flexible and scalable as the services we were working on.

With every GitHub event, our system responds promptly. Based on the configurations established through CDK's Environments feature, our entire setup (the environment and its associated pipelines) is either created or updated, ensuring we're always in sync with the latest code changes. Once these changes are complete, AWS CodePipeline springs into action: it retrieves the updated code from git, then builds, tests, and deploys it to the intended environment, followed by additional testing where configured. By doing so, we maintain a high standard of quality alongside the adaptability and efficiency of our deployment method.

The road ahead

Continuous Deployment, with its emphasis on speed, reliability, and automation, has revolutionized the way developers operate today. While setting up our infrastructure and improving our CI/CD pipeline, we realized the benefits of the AWS ecosystem and the versatility of the Cloud Development Kit (CDK). Our journey at Studyportals, adopting serverless automation, refining branching strategies, and accommodating the dynamic nature of microservice architecture, shed light on the complexities and challenges inherent in the transition process. Yet, it was this very journey that equipped us with the tools and strategies to overcome these challenges and streamline our operations.

AWS CDK stood out as an invaluable tool in this process, offering modularity, reusability, and agility, which ultimately enabled us to keep our infrastructure and applications seamlessly aligned. However, it's crucial to remember that, like any tool, CDK has its own set of challenges, necessitating innovation and strategic workarounds to fully harness its potential.

Going forward, we remain committed to evolving and adapting. Technologies will continue to change, and new challenges will emerge. But with a robust framework in place, a commitment to continuous learning, and an emphasis on adaptability, we are confident in our ability to navigate the future landscape of CI/CD.
