
⁉ Why I started developing 💡 my new software project by building a 🚀 Continuous Deployment 🔃 pipeline

Leonid Belkind ・ 7 min read

Somehow, in recent years, when discussing software delivery pipelines in modern companies, people casually drop the term CI / CD (which means Continuous Integration and Continuous Delivery / Deployment), always combining the two practices. In many cases it is implied that the two practices always go together, or perhaps that Continuous Delivery follows Continuous Integration as an optional step.

During recent years I had the opportunity to oversee the establishment of a software delivery pipeline twice: implementing only Continuous Integration the first time, and starting with Continuous Delivery in my latest project.

🙈 TL;DR / Conclusion

There are HUGE differences between the two practices. These differences affect not just the technology used, but also the culture of the engineering team. Implementing Continuous Integration alone and hoping that it is a sufficient foundation for switching to Continuous Delivery later is a myth.

👊 Which is which and why do you need them?

To explain the choices that we made, one needs to understand the difference between CI and CD. Those who feel confident that they understand it can skip the following paragraphs. Those who are suddenly not so sure should read on...

☝ Continuous Integration

Continuous Integration is a practice targeted at maintaining a working "integrated" system consisting of multiple components being worked on in parallel by multiple engineers / engineering teams.

The following diagram demonstrates a simplified Continuous Integration process:

[Diagram: a simplified Continuous Integration process]

Each of the developers involved delivers a change to one of the components. The CI pipeline constructs an environment consisting of the latest versions of all components, including the new change, and runs a cycle of automated tests.
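As a rough sketch (in Python, with entirely hypothetical component and function names), the gating logic of one such CI cycle boils down to: build a candidate environment from the latest components plus the new change, run the automated test suite against it, and accept the change only if everything passes.

```python
def run_ci_cycle(latest_components, change, test_suite):
    """Simulate one CI cycle: integrate a change, run tests, gate the merge."""
    # Build a candidate environment: latest versions plus the new change.
    candidate = dict(latest_components)
    candidate[change["component"]] = change["version"]

    # Run every automated test against the integrated candidate.
    results = [test(candidate) for test in test_suite]

    # The change is integrated only if all tests pass.
    if all(results):
        latest_components.update(candidate)
        return "integrated"
    return "rejected"

# Example: two components, a change to "api", and a trivial test suite.
env = {"api": "1.0", "web": "2.3"}
change = {"component": "api", "version": "1.1"}
tests = [lambda e: e["api"] >= "1.0", lambda e: "web" in e]
print(run_ci_cycle(env, change, tests))  # integrated; env now has api 1.1
```

The point of the sketch is the gate: the shared "latest" environment only advances when the integrated candidate survives the full test suite.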

The need for Continuous Integration in businesses that depend on their ability to deliver software efficiently and reliably is very obvious. The alternative is downright painful: periodic "big bang" integrations, bringing together development projects that were run separately and hoping that they will magically work together.

I remember the old days, when building a multi-component software server in a large R&D organization meant a painful, iterative integration period, during which tons of builds were delivered to QA only to be pronounced "dead on arrival".

Quite frankly, I cannot think of any good reason why anyone in this day and age would prefer not to use Continuous Integration. Sure, building it effectively requires investing in automated testing, but that is yet another thing that should simply be done, without any reservations.

The only reasonable situation in which CI shouldn't be used is the case where the organization maintains a significant amount of legacy software components that cannot be validated with automated testing. If that is the case, then, unfortunately, pretty much nothing in this article will apply. Trying to do Continuous Integration without sufficiently reliable automated tests for the components produces something of very little practical value.

✌ Continuous Delivery / Deployment

Continuous Delivery and Continuous Deployment are terms that quite often get used interchangeably. There is a difference between the canonical definitions of the two which is important to note, in order to use the correct term:

  • Continuous Delivery is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time, with the actual release performed manually. It aims at building, testing, and releasing software with greater speed and frequency.
  • Continuous Deployment is an approach, where the software, after undergoing a set of automated reliability gates, actually gets deployed in the production environment in a fully automated (and controlled) manner.

While Continuous Delivery may, indeed, be a natural extension to a well-implemented Continuous Integration, Continuous Deployment requires a significant amount of investment, both technological and methodological, in order to implement successfully.

[Diagram: the full flow of a Continuous Delivery cycle with Canary Progressive Delivery]

The above diagram depicts the full flow of a Continuous Delivery cycle implemented with Canary Progressive Delivery (one of the possible strategies for a responsible automated update of software components in production environments).

👟 Implementing just CI

In order to implement Continuous Integration successfully, an organization would need to invest in a number of technical infrastructural components, as well as to encourage (or even enforce) certain rules on the "Definition of Done" for Software Developers.

The three main components listed below allowed us to enable efficient CI and to build a consensus among the members of our engineering team that it is a valuable asset for the organization.

🤖 Automated Tests

An absolute must for any CI environment, automated testing of your code provides the foundation for claiming that a code change has been successfully integrated.

A lot has been said about automated testing methodologies. At this point, it is enough to state that testing automation should be introduced on multiple levels (and not just either Unit Testing or End-to-End Testing). Layers, such as Module Testing, Contract Testing, Performance Testing and even Automated Security Tests should be considered, depending on the requirements of your project.
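To make the layering concrete, here is a minimal sketch (hypothetical function names, not from our codebase) of two of those levels side by side: a unit test that exercises a function in isolation, and a contract test that verifies only the response shape another component depends on.

```python
# A hypothetical service function and two levels of tests around it.
def price_with_tax(amount, rate=0.2):
    return round(amount * (1 + rate), 2)

# Unit test: verifies the function in isolation.
def test_unit_price_with_tax():
    assert price_with_tax(100) == 120.0

# Contract test: verifies the *shape* of the response that another
# component depends on, without exercising that component end-to-end.
def handle_request(payload):
    return {"total": price_with_tax(payload["amount"]), "currency": "USD"}

def test_contract_response_shape():
    response = handle_request({"amount": 50})
    assert set(response) == {"total", "currency"}
    assert isinstance(response["total"], float)

test_unit_price_with_tax()
test_contract_response_shape()
print("all levels passed")
```

Each additional layer (module, performance, security) follows the same pattern: a different scope of the system under test, the same automated pass/fail gate.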

In our experience, introducing all relevant tests incrementally with every code change, and treating them as an inseparable part of the development cycle rather than as separate tasks, ensured that our ability to gate software changes kept the overall system stable in a dynamic environment.

🌀 CI Pipeline

During the early days of Continuous Integration, when it was still called Software Configuration Management Automation, organizations automated software building, testing and packaging by writing custom scripts and code. Jenkins was probably the first CI (and later also CD) pipeline tool to emerge as the mainstream, almost de-facto standard for building automated CI pipelines.

Additional products, both self-hosted open-source ones and SaaS offerings (Travis CI, Circle CI, GitLab Pipelines, GitHub Actions, Google Cloud Build and more), have since taken the architecture and instrumentation of CI light-years ahead.

📦 Artifacts Repository

A very important component in the pipeline: a repository for storing the tested and verified "deliverables". Artifact Repositories have evolved from a file system with a predefined structure into more sophisticated software solutions, capable of delivering the artifacts through various software package management and distribution systems.
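The essence of such a repository can be sketched in a few lines of Python (an in-memory toy, not how any real product like Artifactory or Nexus is implemented): artifacts are addressed by name and version, and a content digest lets consumers verify that what they fetch is exactly what CI verified.

```python
import hashlib

class ArtifactRepository:
    """Minimal in-memory artifact repository: stores verified build
    outputs addressed by name, version, and content digest."""
    def __init__(self):
        self._store = {}

    def publish(self, name, version, content: bytes):
        digest = hashlib.sha256(content).hexdigest()
        self._store[(name, version)] = {"digest": digest, "content": content}
        return digest

    def fetch(self, name, version):
        entry = self._store[(name, version)]
        # Integrity check: the delivered artifact must match its digest.
        assert hashlib.sha256(entry["content"]).hexdigest() == entry["digest"]
        return entry["content"]

repo = ArtifactRepository()
repo.publish("api-server", "1.4.2", b"\x7fELF...binary...")
print(repo.fetch("api-server", "1.4.2")[:4])  # b'\x7fELF'
```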

👞 Implementing CD on top of CI

Do we need to implement Continuous Integration first, and only then consider working on Continuous Delivery? A great article on the subject suggests that, while this is the conventional approach, it can actually be done in a different order (with some unorthodox thinking). While this is a valid opinion, all of the Continuous Delivery environments familiar to the author of this article are, in fact, built on top of Continuous Integration pipelines.

On top of the components listed above, the following ones need to be added in order to implement Continuous Deployment. Above all, the understanding that every delivery will impact real users of your service is something that needs to be accepted across your engineering team. Engineers should feel confident delivering changes, knowing that the deployment is performed responsibly and that the system can be observed well enough to verify that it functions as expected.

👏 Operator / Trigger for Deployments

This is, in fact, the only 100% mandatory component for implementing Continuous Deployment; it is what makes it continuous. Its goal is to trigger the deployment process automatically, either upon completion of a Continuous Integration cycle for a certain change, or upon any other automated trigger.

There are multiple ways to implement such a trigger, both including custom-developed code and open-source projects, such as Flux CD or Spinnaker.
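As a sketch of the idea (Flux CD and Spinnaker implement the real thing; the names and data shapes below are made up for illustration), such a trigger simply reconciles "what CI has verified" against "what production is running" and deploys the difference:

```python
def deployment_trigger(ci_results, deployed_versions, deploy):
    """Poll-style deployment trigger: whenever CI reports a newly
    passing version of a component that is not yet in production,
    kick off a deployment for it."""
    for component, (version, passed) in ci_results.items():
        if passed and deployed_versions.get(component) != version:
            deploy(component, version)
            deployed_versions[component] = version

production = {"api": "1.0"}
actions = []
ci = {"api": ("1.1", True), "web": ("2.3", False)}
deployment_trigger(ci, production, lambda c, v: actions.append((c, v)))
print(actions)      # [('api', '1.1')] — only the passing, new version
print(production)   # {'api': '1.1'}
```

GitOps-style tools run essentially this reconciliation loop continuously against a declarative source of truth.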

🙌 Progressive Delivery Controller

Progressive Delivery is a methodology that ensures gradual adoption of newly delivered components by some users of the overall system (rather than by everyone at once). Coupled with Health Metrics Monitoring, it enables a fully automated, responsible roll-out.

Depending on the nature of your service, Progressive Delivery may not be a must. If you can replace versions of various components of your software in production without worrying about the impact on end-users, you don't need to do anything special.

If causing interruptions to current users of your service is a concern, then some sort of a progressive delivery controller is required. Open Source projects, such as Argo Rollouts and Weaveworks Flagger deliver this functionality for Kubernetes environments.

There are various methods for performing Progressive Delivery, such as (but not limited to) Blue-Green or Canary upgrades.
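The control loop behind a Canary strategy can be sketched as follows (Argo Rollouts and Flagger implement production-grade versions; the step weights and function names here are illustrative): shift traffic to the new version in increments, consult the health metrics at each step, and roll back automatically on failure.

```python
def canary_rollout(set_weight, is_healthy, steps=(10, 25, 50, 100)):
    """Sketch of a Canary Progressive Delivery loop: increase the share
    of traffic routed to the new version step by step, checking health
    after each step, and rolling back fully if a check fails."""
    for weight in steps:
        set_weight(weight)          # route `weight`% of traffic to "new"
        if not is_healthy():        # consult health metrics monitoring
            set_weight(0)           # roll back: all traffic to "stable"
            return "rolled back"
    return "promoted"               # new version serves 100% of traffic

weights = []
print(canary_rollout(weights.append, lambda: True))   # promoted
print(weights)  # [10, 25, 50, 100]
```

The two external dependencies of the loop, `set_weight` and `is_healthy`, correspond exactly to the next two components described below: the traffic split manager and the health metrics monitoring.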

👈👉 Traffic Split Manager / Load Balancer

Since at various stages of a Progressive Delivery process both the "Stable" and the "New" versions of the same component exist simultaneously, something needs to balance traffic between the two. Most Progressive Delivery Controllers manage the traffic split by interacting with components such as load balancers, service mesh controllers, ingress controllers and others capable of directing North-South, as well as East-West, traffic to the various destination instances.
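What the split manager does per request is conceptually tiny; a weighted routing decision can be sketched like this (a toy model of what a load balancer or service mesh performs, not any real product's API):

```python
import random

def route(stable, new, new_weight_percent, rand=random.random):
    """Weighted traffic split: send roughly `new_weight_percent`% of
    requests to the new version, the rest to the stable one."""
    return new if rand() * 100 < new_weight_percent else stable

# With a 25% weight, roughly a quarter of requests reach the new version.
random.seed(42)
hits = sum(route("stable", "new", 25) == "new" for _ in range(10_000))
print(hits / 10_000)  # ≈ 0.25
```

Real implementations make the same decision with consistent hashing, header matching or per-connection weights, but the contract toward the Progressive Delivery Controller is the same: a weight it can adjust.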

👍👎 Health Metrics Monitoring

The ability to establish whether your component is "healthy", i.e., whether it is functioning according to expectations, is a mandatory part of being able to do automated deployment. Without this ability, your developers are "blind", not knowing whether the system is functioning or not.

The monitoring can be done either using passive methodology (i.e., your component emits various metrics or logs and something monitors whether their data is reflecting a properly functioning system) or using active methodology (periodic probes/sensors are executed against the system, verifying that its response is according to what is expected).
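The active methodology can be sketched in a few lines (the probe and expected response here are hypothetical): query the system a few times and compare the answer with the expectation before declaring it healthy or unhealthy.

```python
def active_probe(query, expected, attempts=3):
    """Active health check sketch: probe the system up to `attempts`
    times; any probe matching the expectation marks it healthy,
    consistent failures mark it unhealthy."""
    for _ in range(attempts):
        if query() == expected:
            return "healthy"
    return "unhealthy"

# A flaky component that answers correctly only on the second probe.
responses = iter(["error", "pong", "pong"])
print(active_probe(lambda: next(responses), "pong"))  # healthy
```

The passive methodology inverts the direction: instead of the monitor querying the component, the component emits metrics or logs and the monitor evaluates them against thresholds.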

When considering systems for collecting metrics, Prometheus reigns supreme as the most widely used open-source tool, easily integrated with all the leading Progressive Delivery Controllers.

💫 Tying it all together

Having established what Continuous Integration and Continuous Deployment are, you can now judge the requirements of your specific case. This article has focused on the technical components required to achieve either capability, while noting that engineering teams and their culture are an equally important factor.

Deciding on the proper strategy for your software service is up to you. One important piece of advice: do not underestimate the challenge of changing direction midway, and its impact not just on technical debt, but on the engineering team.

Choose wisely...

P.S., if you are interested in how we implemented an end-to-end Canary Continuous Deployment, refer to this excellent blog by Or Elimelech.


Discussion


While this is a valid opinion, all of the Continuous Delivery environments familiar to the author of this article are, actually, built on top of Continuous Integration pipelines.

As the author of the linked article, I found this critique interesting. I suppose in a sense that's true, but these "Continuous Integration pipelines" are called that, I believe, mainly because that's their common use case.

I would argue that they should be called more simply "pipelines" (and they often are), as they have far-reaching applications that are not limited to CI.

Even if a brand name (like GitLab-CI or Circle-CI) contains CI, if you don't use it for CI, is it really a CI pipeline? A question for the dictionary authors, I suppose...