
Panchanan Panigrahi

Mastering Traffic Management: A Comprehensive Istio Lab Guide

Istio Lab 🧪

In this lab, our primary steps involve:

  1. Istio Installation: 🛠️

    • Initially, we'll install Istio within our k3d cluster, setting up the foundation for service mesh capabilities.
  2. Web Application Deployment: 🚀

    • Following Istio installation, we'll deploy our web application into the cluster, preparing it for enhanced networking and traffic management.
  3. Istio Configuration: ⚙️

    • With the application in place, we'll proceed to configure Istio components, including sidecar injection, gateway setup, virtual service definition, destination rule establishment, and fine-tuning traffic behavior through strategies like splitting and shifting.

These steps will enable us to leverage Istio's powerful features for improved service communication and traffic control within our Kubernetes environment.


Project Overview Through A Diagram: 🎨

(Diagram: Service Mesh with Istio)


Aim Of The Project

  • In this project, we will deploy a two-tier application consisting of web-frontend and customer-backend.

  • At first, we will deploy web-frontend and version-1 of customer-backend.

  • Then we will test version 2 of the customer-backend using a custom header, and debug the v2 deployment.

  • If everything goes well, we will send 10% of our traffic to the version-2 customer-backend.

  • If all goes well, we will then shift all traffic to version 2 of the customer-backend.


Prerequisites for This Lab

Install K3D 🛠️

Install K3D with the following command:

curl -s https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

Create a k3d cluster with one server and two agent nodes:

k3d cluster create my-cluster --agents 2

Install istioctl 🛠️

  • To install istioctl on Linux, follow these steps (they assume a Bash-compatible shell).

      # Download the latest release of istioctl (installs to $HOME/.istioctl/bin)
      curl -sL https://istio.io/downloadIstioctl | sh -
    
      # Add the istioctl to your PATH
      export PATH=$HOME/.istioctl/bin:$PATH
    
      # Check that the cluster is ready for Istio installation
      istioctl x precheck
    
    • If you want to make the change permanent, add the export line to your shell profile (e.g. ~/.bashrc or ~/.zshrc):
      echo 'export PATH=$HOME/.istioctl/bin:$PATH' >> ~/.bashrc
      source ~/.bashrc
    

Install Istio Using Istioctl

Installing Istio with istioctl is easy. Use the following command:

istioctl install -y

Verify that Istio is installed:

istioctl verify-install

The output should end with: ✔ Installation complete


Install Kiali for Visualization: 📊

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.20/samples/addons/kiali.yaml

Enable Automatic Sidecar Injection 🔧🔍

For automatic sidecar injection, label the default namespace:

kubectl label namespace default istio-injection=enabled

Verify that the label has been applied:

kubectl get ns -L istio-injection

Deploy Web Application 🚀🌐

The web application consists of two parts: web frontend and customers backend.

Kubernetes YAML Files for Web Frontend:

Create a file named web-frontend.yaml with the following content:

---
# Service Account for the web-frontend application
apiVersion: v1
kind: ServiceAccount
metadata:
  name: web-frontend

---
# Deployment configuration for the web-frontend application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  labels:
    app: web-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
        version: v1
    spec:
      # Assign the ServiceAccount to the Pod
      serviceAccountName: web-frontend
      containers:
      - name: web
        # Docker image for the web-frontend application
        image: gcr.io/tetratelabs/web-frontend:1.0.0
        # Always pull the latest image
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        env:
        - name: CUSTOMER_SERVICE_URL
          # URL for the customer service within the Kubernetes cluster
          value: "http://customers.default.svc.cluster.local"

---
# Service configuration for the web-frontend application
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
  labels:
    app: web-frontend
spec:
  selector:
    app: web-frontend
  ports:
  - port: 80
    name: http
    # Expose the containerPort 8080 of the Pods
    targetPort: 8080

Apply the YAML file to your Kubernetes cluster:

kubectl apply -f web-frontend.yaml

Kubernetes YAML Files for Customers Backend:

Create a file named customers.yaml with the following content:

---
# Service Account for the customers application
apiVersion: v1
kind: ServiceAccount
metadata:
  name: customers

---
# Deployment configuration for the customers application (version: v1)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      # Assign the ServiceAccount to the Pod
      serviceAccountName: customers
      containers:
      - image: gcr.io/tetratelabs/customers:1.0.0
        # Always pull the latest image
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000

---
# Service configuration for the customers application
kind: Service
apiVersion: v1
metadata:
  name: customers
  labels:
    app: customers
spec:
  selector:
    app: customers
  ports:
  - port: 80
    name: http
    # Expose the containerPort 3000 of the Pods
    targetPort: 3000

Apply the YAML file to your Kubernetes cluster:

kubectl apply -f customers.yaml

Validate your pods:

kubectl get pod
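With automatic sidecar injection enabled, each pod should now run two containers: the application and the Envoy proxy. A small helper (hypothetical, not part of the lab) can check a pod's container list for the sidecar:

```shell
# has_sidecar: given a pod's space-separated container names, report
# whether the istio-proxy sidecar is among them.
has_sidecar() {
  case " $1 " in
    *" istio-proxy "*) echo yes ;;
    *) echo no ;;
  esac
}

# Usage against the cluster (requires kubectl and the deployments above):
# has_sidecar "$(kubectl get pod -l app=web-frontend \
#   -o jsonpath='{.items[0].spec.containers[*].name}')"
```

If injection is working, the check reports yes for both the web-frontend and customers pods.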

Istio Ingress Gateway 🌐🚪

View the Istio Ingress Gateway pod in the istio-system namespace:

kubectl get pod -n istio-system

Note the external IP address for the load balancer:

kubectl get svc -n istio-system

Assign it to an environment variable:

export GATEWAY_IP=$(kubectl get svc -n istio-system istio-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[0].ip}')
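On some platforms the load balancer publishes a hostname instead of an IP, leaving the ip field empty. A sketch of a fallback (pick_lb_address is a hypothetical helper, not part of the lab):

```shell
# pick_lb_address: prefer the load balancer IP; fall back to its hostname.
pick_lb_address() {
  if [ -n "$1" ]; then
    printf '%s\n' "$1"
  else
    printf '%s\n' "$2"
  fi
}

# Usage against the cluster:
# export GATEWAY_IP=$(pick_lb_address \
#   "$(kubectl get svc -n istio-system istio-ingressgateway \
#       -o jsonpath='{.status.loadBalancer.ingress[0].ip}')" \
#   "$(kubectl get svc -n istio-system istio-ingressgateway \
#       -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')")
```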

Configuring Ingress ⚙️🔀

Create a Gateway Resource

Create a file named gateway.yaml with the following Gateway specification:

---
# This YAML defines an Istio Gateway named frontend-gateway for handling incoming HTTP traffic.

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: frontend-gateway
spec:
  # Selector for the Istio Ingress Gateway
  selector:
    istio: ingressgateway
  # List of servers and their configurations
  servers:
  - 
    # Port configuration for HTTP
    port:
      number: 80
      name: http
      protocol: HTTP
    # Hosts to which this gateway is applicable
    hosts:
    - "*"

Apply the gateway resource to your cluster:

kubectl apply -f gateway.yaml

Attempt an HTTP request:

curl -v http://$GATEWAY_IP/

It should return a 404 Not Found, since the gateway does not have any routes configured yet.


Create a VirtualService Resource For web-frontend

Create a file named web-frontend-virtualservice.yaml with the following VirtualService specification:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-frontend
spec:
  hosts:
    - "*"  # Matches all hosts
  gateways:
    - frontend-gateway  # Associates with the frontend-gateway
  http:
    - route:
        - destination:
            host: web-frontend.default.svc.cluster.local
            port:
              number: 80  # Routes to port 80 of the specified host

Apply the VirtualService resource to your cluster:

kubectl apply -f web-frontend-virtualservice.yaml

List virtual services in the default namespace:

kubectl get virtualservice

Traffic Management 🚦🔀

This part has two sections: traffic splitting and traffic shifting.

Traffic Splitting 🚦➡️👥

Version 2 of the customer service has been developed, and it's time to deploy it to production. Whereas Version 1 returned a list of customer names, Version 2 also includes each customer's city.


Before deploying customers (v2)

We are ready to deploy the new service, but we're holding off on directing traffic to it for now.

For a structured approach, let's distinctly handle the deployment of the new service and the traffic-directing tasks.

The customer service is labeled with app=customers.

To validate the labels in the customers.yaml file, use the following command:

kubectl get pod -L app,version

Note the selector on the customers service in the output of the following command:

kubectl get svc customers -o wide

Note that if we deployed v2 now, the service's selector (app=customers) would match pods of both versions.
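To see why, compare the Service selector with the pod labels from the manifests above:

```yaml
# From the customers Service: the selector matches on app only.
selector:
  app: customers

# Pod template labels from the two Deployments:
#   customers-v1 pods carry  app: customers, version: v1
#   customers-v2 pods carry  app: customers, version: v2
# Both label sets satisfy the selector, so the Service routes to both.
```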


DestinationRules for customers backend

We can inform Istio that two distinct subsets of the customers service exist, and we can use the version label as the discriminator.

Create a file named customers-destinationrule.yaml and add the following YAML:

---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
  1. Apply the above destination rule to the cluster:

     kubectl apply -f customers-destinationrule.yaml

  2. Verify that it has been applied:

     kubectl get destinationrule

VirtualServices for customers backend

Armed with two distinct destinations, the VirtualService Custom Resource allows us to define a routing rule that sends all traffic to the v1 subset.

Create a file named customers-virtualservice-v1.yaml and add the following YAML:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1

Above, note how the route specifies subset v1.

  1. Apply the virtual service to the cluster:

     kubectl apply -f customers-virtualservice-v1.yaml

  2. Verify that it has been applied:

     kubectl get virtualservice

We can now safely proceed to deploy v2, without having to worry about the new workload receiving traffic.

Finally, deploy customers (v2)

Create a file named customers-v2.yaml.

Apply the following Kubernetes deployment to the cluster for creating version-2 of our customer-backend deployment:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      serviceAccountName: customers
      containers:
        - image: gcr.io/tetratelabs/customers:2.0.0
          imagePullPolicy: Always
          name: svc
          ports:
            - containerPort: 3000

Note:

Check that traffic routes strictly to v1.

  1. Generate some traffic by refreshing the application page (http://$GATEWAY_IP) a few times.
  2. Open a separate terminal and launch the Kiali dashboard.

    istioctl dashboard kiali
    
  3. Take a look at the graph, and select the default namespace. The graph should show all traffic going to v1.


Route to customers (v2)

We wish to proceed with caution. Before customers can see version 2, we want to make sure that the service functions properly.

Expose "debug" traffic to v2

Create a file called customers-vs-debug.yaml and add the following YAML:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - match:
    - headers:
        user-agent:
          exact: debug
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
  • We are telling Istio to check an HTTP header: if the user-agent is set to debug, route to v2, otherwise route to v1.

  • Open a new terminal and apply the above resource to the cluster; it will overwrite the currently defined VirtualService as both yaml files use the same resource name.

    kubectl apply -f customers-vs-debug.yaml
    

Test the debug VS

Open a browser and visit the application.

We can tell v1 and v2 apart in that v2 displays not only customer names but also their city (in two columns).

The user-agent header can be set on a request with the following command:

curl -H "user-agent: debug" http://$GATEWAY_IP

Note: If you refresh the page a good dozen times and then wait ~15-30 seconds, you should see some of that v2 traffic appear in Kiali.
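Note that the header match is exact: a user-agent of Debug or debug-mode would still be routed to v1. A tiny sketch of the decision the VirtualService encodes (route_for is a hypothetical helper, for illustration only):

```shell
# route_for: mimic the VirtualService rule above — an exact user-agent
# match on "debug" goes to the v2 subset, everything else to v1.
route_for() {
  if [ "$1" = "debug" ]; then
    echo v2
  else
    echo v1
  fi
}
```

For example, route_for debug prints v2, while route_for Debug prints v1.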


Canary Deployment for the customers backend (v2)

v2 looks good, so we decide to expose the new version to the public, though we remain cautious.

Start by siphoning 10% of traffic over to v2.

Create a file named customers-vs-canary.yaml and add the following YAML:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 10
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 90

Above, note the weight field specifying 10 percent of traffic to subset v2. Kiali should now show traffic going to both v1 and v2.

  • Apply the above resource:

    kubectl apply -f customers-vs-canary.yaml
  • In your browser: undo the injection of the user-agent header, and refresh the page a bunch of times.

  • In Kiali, under the Display pulldown menu, you can turn on "Traffic Distribution", to view the relative percentage of traffic sent to each subset.

  • Most of the requests still go to v1, but some (10%) are directed to v2.
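The split can also be checked from the command line. A sketch that samples responses and tallies versions; it assumes (not stated explicitly in the lab) that only v2 payloads contain a city field, and that GATEWAY_IP is set:

```shell
# classify: decide whether a customers response body came from v1 or v2.
# Assumption: only v2 responses contain a "city" field.
classify() {
  case "$1" in
    *'"city"'*) echo v2 ;;
    *) echo v1 ;;
  esac
}

# tally: sample 100 requests and report the split (requires the cluster).
tally() {
  v1=0; v2=0; i=1
  while [ "$i" -le 100 ]; do
    if [ "$(classify "$(curl -s "http://$GATEWAY_IP")")" = v2 ]; then
      v2=$((v2 + 1))
    else
      v1=$((v1 + 1))
    fi
    i=$((i + 1))
  done
  echo "v1=$v1 v2=$v2"
}
```

Running tally with the 90/10 split in place should report roughly 10 of 100 responses as v2.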

If all looks good, raise v2's share from 10% to, say, 50%.

If everything goes well, then we should switch all traffic over to v2.


Traffic Shifting 🚦🔄

Finally, switch all traffic over to v2.

Create a file called customers-virtualservice-final.yaml and add the following YAML:

---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
  • Now apply the above file:

    kubectl apply -f customers-virtualservice-final.yaml

After applying the above resource, go to your browser and make sure all requests land on v2 (two-column output). Within a minute or so, the Kiali dashboard should also reflect the fact that all traffic is going to the customers v2 service.


Conclusion ✨

Congratulations! You've successfully navigated the Istio Lab and learned how to set up Istio, deploy a web application, and configure advanced traffic management. By mastering Istio's features, you can enhance the reliability, security, and scalability of your microservices-based applications. Happy coding! 🎉👍


Acknowledgment 🙏

A heartfelt thanks to Tetrate Academy for their exceptional "Istio Workshop 0-60." This lab played a pivotal role in shaping the content of this guide. Kudos to Tetrate Academy for their dedication to making complex Istio concepts accessible. Grateful for your invaluable contributions! 🌟

I extend my sincere gratitude to all the readers who have dedicated their valuable time and exhibited patience in exploring this content. Your commitment to learning and understanding is truly appreciated. 🙌
