Deployment Strategies

Choosing the right deployment strategy can be tricky, especially when we want zero downtime for our end users.

Traditional deployments were typically done in the middle of the night (when we assume most of our users are fast asleep). They required us to take down our servers, add the new feature/code, and restart the servers again. Although this strategy can work for some of us, and it is the simplest, you can already see two obvious trade-offs:

  1. Deployments must happen at a specific time (typically dead in the night!)

  2. Deployments may take a long time, and for that entire duration, no users can use our software (downtime!)

Deployment Strategies

Here I will share three of the most common deployment strategies, together with their trade-offs.

Blue/Green Deployment

With the Blue/Green deployment strategy, we have two identical environments deployed on our servers, sitting behind a load balancer. The environments are independent of each other but share the exact same configuration.

During the pre-release phase, one of these environments holds the current production version with all of the existing features. All our services are served from this environment, and the load balancer routes all of our users to it.
(Green is the up-to-date, live environment in this case)

[Image: blue/green pre-release]

Now we have a new feature ready to go to production. Instead of deploying it to the green server (currently live), we deploy it to the blue server, the one that is currently idle. This means we now have two servers that are "live": the green server keeps serving our users in production with the old features, while the blue server holds the latest features but receives no production traffic.

We can complete our final testing on the blue server and, once done, reconfigure our load balancer to route production traffic to the blue server instead.

[Image: blue/green post-release]

Live users can now see the new feature, and the good news is that we never had to bring our servers down to add it.

And if any bugs are discovered in the newly live feature, we can simply reconfigure our load balancer to route users back to the previous server.
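
To make the switch concrete, here is a minimal Node.js sketch of the idea, using only the built-in http module. The ports, environment names and the /switch endpoint are made up for illustration; in practice this reconfiguration happens in the load balancer itself (nginx, HAProxy, a cloud load balancer), not in application code.

const http = require('http')

// Hypothetical addresses for the two identical environments
const environments = {
    green: { hostname: 'localhost', port: 3001 }, // currently live version
    blue: { hostname: 'localhost', port: 3002 },  // idle environment with the new feature
}

let active = 'green' // all production traffic goes here

const proxy = http.createServer((req, res) => {
    // Flipping `active` is the "reconfigure the load balancer" step;
    // rolling back is just flipping it again
    if (req.url === '/switch') {
        active = active === 'green' ? 'blue' : 'green'
        res.end(`Now routing to ${active}\n`)
        return
    }

    const target = environments[active]
    const upstream = http.request(
        { hostname: target.hostname, port: target.port, path: req.url, method: req.method, headers: req.headers },
        (upstreamRes) => {
            res.writeHead(upstreamRes.statusCode, upstreamRes.headers)
            upstreamRes.pipe(res)
        }
    )
    upstream.on('error', () => {
        res.writeHead(502)
        res.end('Upstream unavailable\n')
    })
    req.pipe(upstream)
})

proxy.listen(8080)

The cutover and the rollback are the same one-line change, which is what makes blue/green rollbacks so cheap.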

Pros:

  • Zero/Minimal downtime

  • Fairly straightforward to implement

  • Easy rollbacks if new feature fails

  • Allows for testing in a live environment

Cons:

  • Can be expensive, since you run two full environments

  • Can get tricky at scale

Canary Deployment

Canary deployments are very similar to blue/green deployments, but use a slightly different approach. Here we have only one environment but multiple nodes. This deployment strategy is common when using Kubernetes, but can also be done without it.

Let's take, for example, a service running as three replicas. We have three instances running live, with the load distributed evenly among them.

Each instance runs the exact same version of the service.

[Image: canary pre-release]

This time, when we have a new feature available to go live, we deploy it to a single node. We then route only a small percentage of our users to the newly deployed feature.

[Image: canary live-testing phase]

This gives us the ability to let a very small percentage of our users do a live "test" of the feature before rolling it out to 100% of our users.
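
Here is a rough sketch of how that percentage split could be decided per user, in plain JavaScript. The node addresses and the 5% figure are made-up values for illustration; in a real Kubernetes setup this is usually done with weighted routing at the ingress, service mesh or load balancer layer rather than in your own code.

const crypto = require('crypto')

// Hypothetical node addresses
const stableNodes = ['http://node-1:3000', 'http://node-2:3000']
const canaryNode = 'http://node-3:3000' // runs the new version
const CANARY_PERCENT = 5

function pickTarget(userId) {
    // Hash the user ID so the same user consistently sees the same version
    const hash = crypto.createHash('sha256').update(String(userId)).digest()
    const bucket = hash.readUInt32BE(0) % 100 // stable number from 0-99 per user

    if (bucket < CANARY_PERCENT) {
        return canaryNode
    }
    // Everyone else is spread evenly across the stable nodes
    return stableNodes[hash.readUInt32BE(4) % stableNodes.length]
}

console.log(pickTarget('user-42')) // always the same node for user-42

Keeping the split deterministic per user avoids flip-flopping people between the old and new versions while the canary is being evaluated.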

Once we have determined that the new feature is safe to use, with no bugs detected, we deploy it to all of the other nodes and simply split the load of the service evenly again.

[Image: canary post-release]

Pros

  • Zero/minimal downtime

  • Flexibility to experiment with new features

  • Less costly than the blue/green method

  • Quick rollback

Cons

  • Can get complex to set up

  • Requires good observability and metrics to evaluate the canary

Feature Flagging

Feature flagging is another common technique for deploying and releasing new features.

With feature flagging (also commonly known as feature toggles), you can toggle any feature on or off for your users at runtime. This can be as simple as a conditional determining which code path will be executed.

// Flags can come from a config file, environment variables or a flag service
const features = {
    'newFeature': true,
}

// Only the enabled code path runs; the old path stays in place,
// so turning the flag off is an instant rollback
if (features.newFeature) {
    RenderNewFeature()
} else {
    RenderOldFeature()
}

Feature flagging allows you to bundle multiple features into a single deploy and, based on flags, make each one available only to a subset of users (or to none at all, depending on your requirements). In a way, this reduces the total number of deployments.

When your latest feature goes live, and who gets to use it, is now entirely up to you, with a simple toggle.
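
To make the "who" part concrete, here is a sketch of how the boolean flag above could grow into per-user and percentage targeting. The flag shape, user fields and numbers are made up for illustration; dedicated feature-flag tools (LaunchDarkly, Unleash and the like) give you this kind of targeting out of the box.

const flags = {
    newFeature: {
        enabled: true,
        allowedUsers: ['alice@example.com'], // e.g. internal testers
        rolloutPercent: 10,                  // plus 10% of everyone else
    },
}

function isEnabled(flagName, user) {
    const flag = flags[flagName]
    if (!flag || !flag.enabled) return false
    if (flag.allowedUsers.includes(user.email)) return true

    // Simple deterministic bucket based on the user id,
    // so the same user gets the same answer on every visit
    const bucket = [...String(user.id)].reduce((sum, ch) => sum + ch.charCodeAt(0), 0) % 100
    return bucket < flag.rolloutPercent
}

if (isEnabled('newFeature', { id: 'user-42', email: 'bob@example.com' })) {
    RenderNewFeature()
} else {
    RenderOldFeature()
}

In production you would typically load this flag configuration from a remote source, so flags can be flipped without redeploying.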

Pros

  • Adding/removing a feature is extremely simple

  • Fewer deployments, as you can deploy multiple features at once but only release the ones required at the moment

  • No need to maintain long-lived feature branches. We can push our new features directly to the main branch and toggle whether or not to release them

  • If a feature is causing problems, flipping the flag off will disable it, solving the problem immediately without the need to roll back a complex release

Cons

  • You may need to know in advance which features are to be included in each deployment

  • If not implemented properly, the code can get cluttered with conditionals

  • Adding more feature flags makes testing and management increasingly difficult over time. It becomes very hard to tell which flags are still necessary and which are obsolete

Summary

There is no one-size-fits-all solution, and as with everything, there are trade-offs to consider before choosing the right deployment strategy, from how we serve our applications to the problems we are trying to solve. These are just some of the solutions I have worked with and thought it would be nice to share with the community.
