TL;DR: In this tutorial, you will learn how to create, connect, and operate three Kubernetes clusters in different regions: North America, Europe, and Southeast Asia.
One interesting challenge with Kubernetes is deploying workloads across several regions.
While you can technically run a single cluster with nodes spread across regions, this is generally discouraged: the extra latency between nodes hurts both the control plane (etcd expects low-latency links between its members) and pod-to-pod traffic.
Another popular alternative is to deploy a cluster for each region and find a way to orchestrate them.
But before discussing solutions, let's look at the challenges of a multicluster & multi-cloud setup.
When you orchestrate several clusters, you have to face the following issues:
- How do you decide how to split the workloads?
- How does networking work across regions?
- What do you do with stateful apps and data?
Let's try to answer some of those questions.
To tackle the first (scheduling workloads), I used Karmada.
With Karmada, you can create deployments with kubectl and distribute them across several clusters using policies.
Karmada takes care of propagating them to the correct cluster.
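As a minimal sketch, a PropagationPolicy that copies a Deployment to three member clusters could look like this (the Deployment name and the cluster names `cluster-us`, `cluster-eu` and `cluster-asia` are placeholders for your own resources and registered clusters):

```yaml
# Minimal Karmada PropagationPolicy sketch.
# Selects an existing nginx Deployment and propagates it
# to three (hypothetical) member clusters.
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - cluster-us
        - cluster-eu
        - cluster-asia
```

You apply both the Deployment and the policy to the Karmada control plane with kubectl; the controller manager then takes care of creating the Deployment in each matching cluster.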
The project is similar (in spirit) to kubefed.
Karmada uses a Kubernetes cluster as the manager and creates a second control plane that is multicluster aware.
This is particularly convenient because kubectl "just works": you keep using the same commands, but they can now apply resources across clusters and aggregate data from all of them.
Each cluster has an agent that issues commands to the cluster's API server.
The Karmada controller manager uses those agents to sync and dispatch commands.
Karmada uses policies to decide how to distribute your workloads.
You could define a policy that spreads a deployment's replicas equally across regions, or one that pins all the pods to a single region.
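As a sketch of the first case (cluster names are again placeholders), a placement can divide a Deployment's replicas across regions with static weights:

```yaml
# Sketch of a weighted placement: Karmada divides the total
# replica count across the member clusters according to the
# static weights below (here, equally).
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-weighted
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames: [cluster-us]
            weight: 1
          - targetCluster:
              clusterNames: [cluster-eu]
            weight: 1
          - targetCluster:
              clusterNames: [cluster-asia]
            weight: 1
```

With equal weights, a Deployment with 9 replicas ends up with 3 in each cluster; to pin a workload to a single region, you would instead list just one cluster in the placement.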
Karmada is essentially a multicluster orchestrator but doesn't provide any mechanism to connect the clusters' networks.
Out of the box, traffic routed to a region is served only by the pods in that region.
But you can use a service mesh like Istio to create a network that spans several clusters.
Istio can discover service instances running in remote clusters and forward traffic to them.
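For this to work, each cluster's Istio installation has to know that it belongs to the same mesh but sits on a different network. With the IstioOperator API, the per-cluster configuration looks roughly like this (the mesh, cluster and network names are placeholders):

```yaml
# Sketch of a multicluster Istio install for one of the clusters.
# meshID is shared by every cluster in the mesh; clusterName and
# network are unique per cluster.
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      meshID: mesh1
      multiCluster:
        clusterName: cluster-us
      network: network-us
```

Each cluster also needs credentials (a remote secret) for the other clusters' API servers so that the control planes can discover each other's endpoints.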
But how does the traffic routing work?
For every app in your cluster, Istio injects a sidecar proxy.
All traffic to and from the app goes through the proxy.
The Istio control plane can configure the proxy on the fly and apply routing policies.
In a multicluster setup, Istio instances share endpoints.
When a request is issued, the sidecar proxy intercepts it and forwards it to one of the endpoints, which could live in any of the clusters.
Since Istio's routing rules let you control the flow of traffic between services, you can direct all traffic to a single region even when pods are deployed in every region.
Or you could create rules to shift traffic from one region to another.
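Istio models this with locality-aware load balancing. As a sketch (the host and region names are placeholders), a DestinationRule can keep 80% of the traffic originating in one region local and shift 20% of it to another:

```yaml
# Sketch of locality-weighted routing: traffic from us-east is
# split 80/20 between the local region and eu-west. Region names
# are placeholders and must match the topology.kubernetes.io/region
# labels on your nodes.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-locality
spec:
  host: app.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      localityLbSetting:
        enabled: true
        distribute:
          - from: "us-east/*"
            to:
              "us-east/*": 80
              "eu-west/*": 20
    # Outlier detection must be set for locality load
    # balancing to take effect.
    outlierDetection:
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 30s
```

Adjusting the weights lets you drain a region gradually, for example during a failover or a regional rollout.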
Nice in theory, but does it work in practice?
I built a proof of concept with Terraform so that you can recreate it in 5 clicks here: https://github.com/learnk8s/multi-cluster
I also installed Kiali to visualise the traffic flowing in the clusters in real time.
If you wish to see this in action, you can watch my demo here.
And finally, if you've enjoyed this thread, you might also like the Kubernetes workshops that we run at Learnk8s https://learnk8s.io/training or this collection of past Twitter threads https://twitter.com/danielepolencic/status/1298543151901155330
Until next time!