DEV Community

DeveloperSteve

A Guide on Canary Deployments in Kubernetes

In the world of software development, releasing new versions of applications can be fraught with risk. Ensuring that changes are introduced without disrupting the user experience is of paramount importance.

Canary deployments offer a reliable and efficient way to mitigate these risks by progressively rolling out new features to a small subset of users before making them available to everyone.

In this blog post, we'll delve into the intricacies of implementing canary deployments in Kubernetes and provide code samples to guide you through the process. Let's dive into the world of Kubernetes canary deployments and master the art of progressive releases!

Understanding Canary Deployments

Canary deployments get their name from the historical practice of using canaries in coal mines to detect dangerous gases. Similarly, canary deployments help detect issues in new releases by gradually rolling them out and monitoring their performance. By limiting the exposure of new features to a small percentage of users, developers can identify and address potential problems before they affect the entire user base.

Prerequisites

For the purpose of this guide, we assume that you have a basic understanding of Kubernetes concepts and have access to a running Kubernetes cluster. We will be using a sample application with two versions (v1 and v2) for demonstration purposes.

Creating the Base Deployment and Service

First, let's create a base deployment for our application with the v1 version. Create a deployment.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: my-app
        image: my-app:v1
        ports:
        - containerPort: 8080

Apply the deployment using kubectl:

kubectl apply -f deployment.yaml

Next, create a service.yaml file for our application:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: LoadBalancer

Apply the service using kubectl:

kubectl apply -f service.yaml

Implementing Canary Deployment

To implement a canary deployment, we'll create a separate deployment for the v2 version of our application with a single replica. Create a canary-deployment.yaml file with the following content:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
      - name: my-app
        image: my-app:v2
        ports:
        - containerPort: 8080

Apply the canary deployment using kubectl:

kubectl apply -f canary-deployment.yaml

Configuring Traffic Splitting

To direct a portion of the traffic to the v2 version, we'll use Istio, a popular service mesh for Kubernetes. Ensure that Istio is installed in your cluster and that sidecar injection is enabled for your application's namespace. To configure traffic splitting, create a virtual-service.yaml file with the following content:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app-virtual-service
spec:
  hosts:
    - my-app-service
  http:
    - route:
        - destination:
            host: my-app-service
            subset: v1
          weight: 90
        - destination:
            host: my-app-service
            subset: v2
          weight: 10

In this configuration, we're directing 90% of the traffic to the v1 version and 10% to the v2 version (our canary release). Apply the virtual service using kubectl:

kubectl apply -f virtual-service.yaml
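One detail that's easy to miss: the v1 and v2 subsets referenced in the VirtualService don't exist until you define them in an Istio DestinationRule, which maps each subset name to pod labels. A minimal example matching the labels used above (the resource name my-app-destination-rule is just an illustrative choice):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-app-destination-rule
spec:
  # The service whose endpoints are being grouped into subsets
  host: my-app-service
  subsets:
    # Each subset selects pods by their "version" label
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
```

Apply it with kubectl apply, just like the other manifests. Without this DestinationRule, requests routed to the v1 or v2 subsets will fail because Istio can't resolve the subset names to endpoints.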

Monitoring and Observability

Effectively monitoring and observing the performance of canary deployments is a crucial aspect of ensuring their success. By collecting metrics, logs, and traces from both the v1 and v2 versions, you can identify discrepancies or issues that might be introduced by the new release. The Kubernetes ecosystem offers widely used open-source monitoring tools like Prometheus and Grafana to facilitate this process, but third-party observability services can further enhance your monitoring capabilities.

One such third-party service is Lumigo, which provides comprehensive observability for serverless applications, microservices, and Kubernetes clusters. Lumigo enables you to monitor the performance and stability of your canary deployments with real-time insights, helping you detect potential issues early in the process.

To integrate Lumigo with your Kubernetes cluster, you can use the Lumigo Kubernetes Operator. This open-source project is available on GitHub at https://github.com/lumigo-io/lumigo-kubernetes-operator. The Lumigo Kubernetes Operator automates the deployment and management of the Lumigo tracer in your Kubernetes environment, ensuring seamless integration with your existing monitoring setup.

To get started with the Lumigo Kubernetes Operator, follow the installation and configuration instructions provided in the project's GitHub repository. Once the operator is deployed and configured, it will automatically inject the Lumigo tracer into your cluster's pods, enabling end-to-end monitoring of your applications.

With Lumigo, you can gain valuable insights into the performance and health of your canary deployments. The platform provides real-time metrics and visualisations, making it easy to compare the performance of the v1 and v2 versions of your application. If you detect any problems or anomalies, you can quickly roll back the canary release by adjusting the traffic split in the virtual service or by removing the v2 deployment.
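A rollback can be as simple as re-applying the VirtualService with all traffic weighted back to v1. The weights below are illustrative; you could equally remove the v2 route entirely:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app-virtual-service
spec:
  hosts:
    - my-app-service
  http:
    - route:
        # Send all traffic back to the stable v1 subset
        - destination:
            host: my-app-service
            subset: v1
          weight: 100
        # The canary subset receives no traffic while you investigate
        - destination:
            host: my-app-service
            subset: v2
          weight: 0
```

Because the change is purely in the routing layer, the rollback takes effect as soon as Istio propagates the updated configuration, without touching the deployments themselves.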

Scaling Canary Deployments

Once you've confirmed that the canary release is stable and functioning as expected, you can gradually increase the traffic to the v2 version by adjusting the weights in the virtual service. This allows you to safely roll out the new version to more users, while still keeping an eye on performance metrics.
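For example, to move to an even split, update the weights in virtual-service.yaml and re-apply it. The 50/50 split here is just one possible step; increase the canary's share in whatever increments suit your risk tolerance:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app-virtual-service
spec:
  hosts:
    - my-app-service
  http:
    - route:
        # Half of the traffic stays on the stable version
        - destination:
            host: my-app-service
            subset: v1
          weight: 50
        # Half goes to the canary as confidence grows
        - destination:
            host: my-app-service
            subset: v2
          weight: 50
```

You may also want to scale up the v2 deployment's replica count as its traffic share grows, so a single canary pod doesn't become a bottleneck.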

When you're ready to fully transition to the new version, update the base deployment to use the v2 image and remove the canary deployment. Note that the base deployment keeps its name and its version: v1 labels, because a Deployment's selector is immutable; only the container image changes:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
      - name: my-app
        image: my-app:v2
        ports:
        - containerPort: 8080

Apply the updated deployment and remove the canary deployment using kubectl:

kubectl apply -f deployment.yaml
kubectl delete -f canary-deployment.yaml

Release your canary deployment into the wild

Canary deployments in Kubernetes provide a reliable and efficient way to progressively roll out new features, allowing developers to identify and resolve issues before they impact the entire user base.

By leveraging Istio and monitoring tools, you can confidently introduce changes to your applications with minimal disruption. Master the art of canary deployments in Kubernetes and take your application releases to new heights!

Happy kubernetesing ☸️
