
Simplifying Distributed Applications with Multi-Zone Kuma Service Mesh Deployment

By Amrutha Paladugu
Author LinkedIn: https://www.linkedin.com/in/amrutha-paladugu/


In the exciting world of distributed applications, juggling network connectivity and security across various environments can be quite a task. But fret not! Kuma, a vibrant open-source service mesh, comes to the rescue with its robust capabilities for managing and safeguarding communication between services. It simplifies the complexity of managing distributed systems and enables developers to focus on building applications without worrying about network-related challenges.

So, buckle up as we dive into the thrilling adventure of deploying a multi-zone global control plane using Kuma and the incredible Kubernetes platform. The multi-zone deployment allows you to distribute your control plane across multiple regions or availability zones, ensuring high availability and resilience.

The Bigger Picture:

The underlying logic of the Kuma multi-zone deployment revolves around establishing a distributed control plane architecture with global and zonal control planes.

• The global control plane is the central authority for the entire service mesh. It manages the overall configuration and policies that apply across all zones.
• Zonal control planes (CP) are deployed in specific zones, such as regions or availability zones. Each zonal CP connects to the global control plane to synchronize information. Zonal CPs provide local processing and handle local traffic efficiently.
• Cross-zone communication is enabled through the global control plane, which manages global policies and synchronizes information between zones (see the example below).
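
For example, a mesh-wide policy defined once on the global control plane is synchronized to every zone over KDS. Below is a minimal sketch that enables mutual TLS with Kuma's builtin CA, written in Kuma's universal resource format (matching the universal-mode global control plane configured later in this guide); the file name mesh.yaml is assumed for illustration:

type: Mesh
name: default
mtls:
  enabledBackend: ca-1
  backends:
    - name: ca-1
      type: builtin

Once kumactl is connected to the global control plane (covered in Step 4), kumactl apply -f mesh.yaml pushes this configuration, and KDS propagates it to every connected zone.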

(Diagram: the global control plane synchronizing configuration with zonal control planes across zones via KDS)

The multi-zone mesh deployment process involves the following steps:

• Installing Kubernetes and kumactl for cluster management and interaction with the control plane
• Deploying the global control plane with Helm, along with PostgreSQL for configuration storage
• Installing zonal control planes on separate Kubernetes clusters and connecting them to the global control plane over the KDS (Kuma Discovery Service)
• Enabling the mesh by labeling namespaces and deploying applications within those namespaces
• Configuring the ZoneIngress and testing connectivity between zones to enable cross-zone communication

Step 1: Install K8s and kumactl

The first step is to install Kubernetes (K8s). We are exploring the scenario where Kuma Service Mesh runs on top of Kubernetes, so we need a Kubernetes cluster to deploy our control plane.

To install a K3s cluster, run the following commands:

curl -sfL https://get.k3s.io | sh -
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
mkdir ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
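
To confirm the cluster is reachable before proceeding, list the nodes; the K3s node should report a Ready status:

kubectl get nodes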

Next, we must install the kumactl command-line tool, which interacts with the Kuma control plane.

To install kumactl, run the following commands:

curl -L https://kuma.io/installer.sh | VERSION=2.3.0 sh -
cd kuma-2.3.0/bin
PATH=$(pwd):$PATH
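
To verify the installation, print the version; it should report the 2.3.0 client pinned above:

kumactl version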

Step 2: Deploy a Multi-Zone Global Control Plane:

We will deploy the global control plane on the Kubernetes cluster (this could be on a VM or any cloud) following the steps below:

Step 2.1: Create the Namespace:

Create a Kubernetes namespace for the Kuma system:

kubectl create ns kuma-system

Step 2.2: Install PostgreSQL using Helm:

Use Helm to install PostgreSQL, which is required for storing Kuma's configuration:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-postgresql bitnami/postgresql --version 12.5.8 -n kuma-system

Note the database name, user, password, and read-write (RW) host for later use.
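
If you kept the chart defaults, the admin user is postgres, the default database is postgres, and the in-cluster RW host follows the <release-name>.<namespace>.svc.cluster.local pattern (my-postgresql.kuma-system.svc.cluster.local here). A sketch for retrieving the generated password, assuming the release name my-postgresql used above and the chart's default secret key:

kubectl get secret my-postgresql -n kuma-system -o jsonpath='{.data.postgres-password}' | base64 -d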

Step 2.3: Define Kubernetes Secrets Manifest

Create a secrets.yaml file to store sensitive information required for connecting to the PostgreSQL database:

apiVersion: v1
kind: Secret
metadata:
  name: your-secret-name
type: Opaque
data:
  POSTGRES_DB: <Postgres-DB-name>
  POSTGRES_HOST_RW: <Postgres-host>
  POSTGRES_USER: <Postgres-user>
  POSTGRES_PASSWORD: <Postgres-password>

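Each value under data must be base64-encoded before it goes into the manifest. For example, encoding a hypothetical database name of kuma:

echo -n 'kuma' | base64   # prints a3VtYQ==
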
Replace the placeholders with the corresponding base64-encoded values, then apply the secrets manifest to the kuma-system namespace:

kubectl apply -f secrets.yaml -n kuma-system

Step 2.4: Add the Kuma Helm Repo:

Add the Kuma Helm chart repository:

helm repo add kuma https://kumahq.github.io/charts

Step 2.5: Update Chart Values:

Update the values in the values.yaml file to configure the global control plane:

controlPlane:
  environment: "universal"
  mode: "global"
  secrets:
    postgresDb:
      Secret: your-secret-name
      Key: POSTGRES_DB
      Env: KUMA_STORE_POSTGRES_DB_NAME
    postgresHost:
      Secret: your-secret-name
      Key: POSTGRES_HOST_RW
      Env: KUMA_STORE_POSTGRES_HOST
    postgresUser:
      Secret: your-secret-name
      Key: POSTGRES_USER
      Env: KUMA_STORE_POSTGRES_USER
    postgresPassword:
      Secret: your-secret-name
      Key: POSTGRES_PASSWORD
      Env: KUMA_STORE_POSTGRES_PASSWORD

Note: controlPlane.environment is set to "kubernetes" for a Kubernetes-based deployment, and controlPlane.mode is set to "zone" for a zonal control plane.

Step 2.6: Install the Global Control Plane:

Install the global control plane using Helm:

helm install kuma -f values.yaml --skip-crds -n kuma-system kuma/kuma

Step 2.7: Find the EXTERNAL-IP and Port:

The global control plane's Kuma Discovery Service (KDS) component is responsible for propagating dynamic configuration updates across the entire service mesh. KDS synchronizes the global and zonal control planes over a secure gRPC-based protocol. The KDS address is the external IP of the global-zone-sync service and can be found as shown below:

kubectl get services -n kuma-system

NAMESPACE     NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                                                                  AGE
kuma-system   global-zone-sync     LoadBalancer   10.105.9.10     35.226.196.103   5685:30685/TCP                                                           89s
kuma-system   kuma-control-plane   ClusterIP      10.105.12.133   <none>           5681/TCP,443/TCP,5676/TCP,5677/TCP,5678/TCP,5679/TCP,5682/TCP,5653/UDP   90s
Enter fullscreen mode Exit fullscreen mode

In this example, the global-kds-address is 35.226.196.103 with port 5685. Note that when the global control plane is on a VM, the public IP of the VM will act as the global-kds-address if you are trying to connect to this control plane from a different network.

Step 3: Set up Zone Control Planes:

We shall install a zonal control plane on a K3s cluster on a different VM. The zone name is an arbitrary string; this value registers the zone control plane with the global control plane.

helm install kuma \
  --create-namespace \
  --namespace kuma-system \
  --set controlPlane.mode=zone \
  --set controlPlane.zone=<zone-name> \
  --set ingress.enabled=true \
  --set controlPlane.kdsGlobalAddress=grpcs://<global-kds-address>:5685 \
  --set controlPlane.tls.kdsZoneClient.skipVerify=true \
  kuma/kuma

Note that this zonal control plane is already connected to the global control plane, because we passed the global KDS address to the controlPlane.kdsGlobalAddress value during installation.

Step 4: Enable Mesh and Deploy a Sample App

One way to enable the mesh for Kubernetes deployments is by labeling a namespace. When you label a namespace with kuma.io/sidecar-injection=enabled, Kuma injects the sidecar proxy into the pods running in that namespace.

Step 4.1: Create a Namespace and Enable Sidecar Injection
Create a Kubernetes namespace for your application:

kubectl create ns <name-space>
kubectl label ns <name-space> kuma.io/sidecar-injection=enabled

Any deployments in this labeled namespace are part of the mesh.

Step 4.2: Deploy a Sample App
Deploy a sample app in this namespace using a Debian-based Node.js container image:

kubectl create deploy <deployment-name> --image=debianmaster/nodejs-welcome -n <name-space>

Step 4.3: Expose the Deployment as a Service

Expose the deployment as a service using a NodePort type:

kubectl expose deployment <deployment-name> --type=NodePort --name=<service-name> --port=8080 --target-port=8080 -n <name-space>

Note: Verify the mesh by inspecting the pod containers; each pod in the labeled namespace should now run a kuma-sidecar container alongside the application container.
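
A quick way to check is to list the container names in the pods; kuma-sidecar appearing next to your application container indicates successful injection:

kubectl get pods -n <name-space> -o jsonpath='{.items[*].spec.containers[*].name}'
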
To access Kuma through the GUI or kumactl, first port-forward the API service:

kubectl port-forward svc/kuma-control-plane -n kuma-system 5681:5681

You can run the kumactl get zones command, or check the list of zones in the global control plane's web UI, to verify the zone control plane connections.

Note: If kumactl is not connected to the global control plane, run the command below to establish a connection:

kumactl config control-planes add --name=<name> --address http://localhost:5681

When a zone control plane connects to the global control plane, the Zone resource is created automatically in the global control plane.

Then navigate to http://localhost:5681/gui to see the GUI.

You will notice that Kuma automatically creates a mesh entity named default.
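
With the port-forward from above still active, you can confirm this from the command line; kumactl lists the meshes known to the global control plane:

kumactl get meshes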

Step 5: Test Cross-Zone Communication

Replicate the zone creation steps above to create a second zone.

Step 5.1: Patch the ZoneIngress:

To enable cross-zone communication in the current scenario, where the zones are on different networks, the advertisedAddress in the ZoneIngress object must be the public IP of the VM hosting the zone. This can be done by patching the ZoneIngress object of zone1:

kubectl -n kuma-system patch zoneingress "$(kubectl -n kuma-system get zoneingress -o=jsonpath='{.items[0].metadata.name}')" --type='json' -p='[{"op": "replace", "path": "/spec/networking/advertisedAddress", "value": "<publicIP-of-VM>"}]'
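
To confirm the patch took effect, read the field back using the same path the patch targeted:

kubectl -n kuma-system get zoneingress -o jsonpath='{.items[0].spec.networking.advertisedAddress}'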

Step 5.2: Test the cross-zone connection:

Open a shell prompt ('exec -it') inside a pod of zone1 and try to connect to services in zone2. Get the addressPort of the zone2 service using the command below:

kubectl get serviceinsight all-services-default -oyaml   ## (note the addressPort of the required zone2 service)

kubectl exec -it <zone1-workload-pod-name> -- sh
curl http://<zone2-service-addressPort>

Note: If curl is not found inside the pod, install it first (for example, apk update && apk add curl on Alpine-based images, or apt-get update && apt-get install -y curl on Debian-based ones).

Conclusion:

Congratulations! You have successfully deployed a multi-zone global control plane using Kuma and Kubernetes. With Kuma's powerful service mesh capabilities, you can manage and secure communication between services across multiple zones. Following the steps outlined in this article, you have learned how to set up the global control plane, connect zonal control planes and enable cross-zone communication. Explore further to leverage the full potential of Kuma and enhance the connectivity and security of your distributed applications.

For more information, visit: https://zelarsoft.com/
