What makes an application modern? One defining factor is whether it uses zero-downtime deployments. If you can deploy a new version of your application without your users noticing, that's a good sign your application follows modern practices. In modern, cloud-native environments this is relatively easy to achieve; however, it's not always as simple as deploying a new version and quickly switching traffic to it. Some applications need to finish long-running tasks first. Others have to avoid breaking user sessions. The bottom line is that, as with pretty much any technology, you can do basic zero-downtime deployments or more advanced ones.
In this post, you'll learn about the latter. We'll talk about what rainbow deployments are, and how you can use them for very efficient zero-downtime deployments.
Zero-Downtime Deployments
In order to explain rainbow deployments, we need to have a good understanding of zero-downtime deployments in general. So, what are they? The name gives it away. Zero-downtime deployment is when you release a new version of your application without any downtime. This usually means that you deploy a new version of the application, and users are switched to that new version without even knowing.
Zero-downtime deployments are superior to traditional deployments, where you schedule a "maintenance window" and show a "we are down for maintenance" message to your users for a certain amount of time. In the world of Kubernetes, there are two main ways of completing (near) zero-downtime deployments: Kubernetes' own rolling update deployment, and blue/green deployments. Let's quickly go over both so we'll have a good base of knowledge before diving into the rainbow deployments.
Rolling Update
Kubernetes rolling updates are quite simple yet very effective in many cases. The traditional software update process is usually done by shutting down the old version and then deploying the new version. That, of course, will introduce some downtime.
A Kubernetes rolling update does the opposite. It first deploys a new version of the application right next to the old version, and as soon as the new version is marked as up and running, it automatically switches traffic to the new version—and only then does it delete the old version. Therefore, no downtime.
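In Kubernetes, this behavior is configured on the Deployment itself. Here is a minimal sketch; the `maxSurge` and `maxUnavailable` values are illustrative choices, not requirements:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create at most one extra pod during the rollout
      maxUnavailable: 0  # keep all desired replicas serving at all times
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: your_application:0.2
          ports:
            - containerPort: 80
```

With `maxUnavailable: 0`, Kubernetes only removes an old pod after its replacement reports ready, which is what makes the rollout downtime-free.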
However, a Kubernetes rolling update has some limitations. Your application needs to be able to handle such a process, you need to think about database access, and it's an all-or-nothing process: you have no control over when, or how gradually, traffic switches to the new version.
Blue/Green Deployments
Blue/green deployments are a step up that addresses the limitations of simple rolling updates. In this model, you always keep two deployments (or two clones of the whole infrastructure): one called blue and one called green. At any given time, only one is active and serving traffic, while the other sits idle. When you need to release an update, you deploy it on the idle side, test that everything works, and then switch the traffic.
This model is better than a simple rolling update because you have control over switching traffic, and you can have the new version running for a few minutes or even hours so that you can do testing to make sure you won't have any surprises once live traffic hits it.
However, while better than rolling updates, blue/green deployments also have their limitations. The most important is that you're limited to two environments: blue and green. While in most cases that's enough, there are use cases where that would be a limiting factor. For example, if you have long-running tasks such as database migrations or AI processing.
When Blue/Green Is Not Enough
Imagine a situation where you deploy a new version of your long-running software to your blue environment, you test if it's okay, and you make it your live environment. Then you do the same again for the green environment—you deploy a new version there and switch again from blue to green.
So now, if you'd like to deploy a new version again, you'd have to do it on the blue environment. But blue could still be working on that long-running task. You can't simply stop a database migration in the middle because you'll end up with a corrupted database. So you'll have to wait until the software on the blue environment is finished before you can make another deployment. And that's where rainbow deployments come into play.
What Is a Rainbow Deployment?
Rainbow deployment is the next level of deployment methods that solves the limitation of blue/green deployments. In fact, rainbow deployments are very similar to blue/green deployments, but they're not limited to only two (blue and green) environments. You can have as many colorful environments as you like—thus the name.
At Release we use Kubernetes namespaces along with our deployment system to automate the creation and removal of rainbow deployments for your application. Release will automatically create and manage a new namespace for each deployment.
As we said, the working principle of rainbow deployment is the same as blue/green deployments, but you can operate on more copies of your application than just two. So, let's take our example from before, the one about the long-running task. Instead of waiting for the blue environment to finish in order to make another deployment, you can just add another environment. Let's call it yellow.
Then we have three environments: blue, green, and yellow. Our blue is busy, and green is currently serving traffic. So if we want to deploy a new version of our application, we can deploy it to yellow and then switch traffic to it from green. And that's how rainbow deployment works.
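In Kubernetes terms, "switching traffic" can be as simple as changing a Service's label selector from the green pods to the yellow ones. A sketch, assuming the pods carry a color label:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginx
    color: yellow   # was "green"; updating this one line moves live traffic
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

The blue pods keep running untouched while this happens, so the long-running task they're working on is never interrupted.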
This is a very powerful method of deploying applications because it really lets you avoid downtime as much as possible for as many users as possible. Long-running tasks blocking your deployments are just one example; there are more use cases. For instance, if your application uses WebSockets, no matter how fast and seamless your deployments are, you'll still have to disconnect users from their WebSocket sessions, so they could lose notifications or other data from your app. Rainbow deployments can solve that problem too: you deploy a new version of your application and keep the old one running until users finally disconnect from their WebSocket sessions. Only then do you remove the old version.
How to Do a Rainbow Deployment
Now that you know what rainbow deployments are, let's see how you actually do them. There is no one standard way of achieving rainbow deployments. In fact, there aren't even any tools that you can install that will do rainbow deployments for you. It's more of a do-it-yourself approach. That may seem like bad news, because you can't simply install some tool and benefit from rainbow deployments, but we can leverage the tools we have to enable rainbow deployments with just a few extra lines of logic.
So, how do you do it, then? You use your current CI/CD pipelines. All you need to do is to point whatever network device you're using to a specific "color" of the application when you deploy one. In the case of Kubernetes, this could mean changing the Service or Ingress objects to point to a different deployment. Let's see an example. Below are some very simple and typical Kubernetes deployment and service definitions:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: your_application:0.1
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      name: nginxs
      targetPort: 80
We have one deployment and one service that points to that deployment. The service knows which pods to send traffic to based on labels: it's instructed to look for pods that carry the label app with the value nginx. But what if we also selected by color? You'd pretty much end up with a rainbow deployment strategy.
Enter Rainbow Magic
So, your definition would look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-[color]
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      color: [color]
  template:
    metadata:
      labels:
        app: nginx
        color: [color]
    spec:
      containers:
        - name: nginx
          image: your_application:0.2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxservice
spec:
  selector:
    app: nginx
    color: [color]
  ports:
    - protocol: TCP
      port: 80
      name: nginxs
      targetPort: 80
It would then be your CI/CD job's responsibility to replace [color] in the YAML definition every time you want to deploy a new version. So you deploy your application along with a service for it. The next time you want to deploy a new version, instead of updating the existing deployment, you create a new deployment and update the existing service to point to it. You can repeat that process as many times as you want, and once old deployments aren't needed anymore, you delete them. That's the working principle of rainbow deployments.
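A minimal sketch of what that CI/CD substitution step might look like. The file paths and the color "yellow" are assumptions for illustration; in a real pipeline the value might come from a git commit hash instead:

```shell
# Write a trimmed-down manifest template (a real one would be the full
# deployment and service definition shown above).
cat > /tmp/deployment-template.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-[color]
EOF

COLOR="yellow"

# Replace every [color] placeholder and write a color-specific manifest.
sed "s/\[color\]/${COLOR}/g" /tmp/deployment-template.yaml > "/tmp/deployment-${COLOR}.yaml"

cat "/tmp/deployment-${COLOR}.yaml"
# A real pipeline would now apply it, e.g.:
# kubectl apply -f /tmp/deployment-yellow.yaml
```

Each run of the pipeline produces a brand-new, uniquely named deployment rather than mutating the previous one.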
It's also worth mentioning that you don't need to use colors to distinguish your deployments; any unique identifier works, a git commit hash being a common example. Another thing worth knowing is that this method isn't exclusive to Kubernetes. You can use it in pretty much any infrastructure or environment, as long as you have a way to distinguish deployments and point network traffic at a specific one.
Rainbow Deployment Summary
With ReleaseHub, you have easy access to unlimited environments, so we extended the blue/green pattern to the infinite colors of the rainbow. Each deployment happens in its own Kubernetes namespace, which is a copy of your production environment. Each namespace gets a color, and you can have as many colors as you need.
Rainbow deployments can be a little difficult to grasp at first. They may seem wasteful or counterintuitive, but they solve a lot of problems with common deployment methods and bring real benefits to your users. That said, they're not a magic solution that will fix every application problem. Your infrastructure and your application need to be compatible with this approach. Database handling can be especially tricky (for example, you don't want two versions of the application writing to the same record in the same database). But these are typical problems that you need to solve anyway when dealing with distributed systems.
Once you improve the user experience, you can also think about improving your developer productivity. If you want to learn more, take a look at our documentation here.
About Release
Release is the simplest way to spin up even the most complicated environments. We specialize in
taking your complicated application and data and making reproducible environments on-demand.