Quite recently, I was tasked with creating a production-ready Kubernetes cluster from the ground up. The task gave me deep, practical insight into the technical structure and setup of Kubernetes. Though it took me a while, I was able to set it up as requested, following the documentation.
Moving forward with the knowledge gained, I decided to set up an AWS EKS cluster to use for some application deployments. Those next tasks would center around deployments and persisting data in k8s; this article, however, is only a guide to provisioning your own AWS EKS cluster using IaC (Terraform). It is advisable to do this first with the console if you don't already know how it all works.
Terraform is the IaC tool for this guide (knowledge of Terraform is a prerequisite for this process). Below are the steps to follow in setting up your EKS cluster using Terraform on your control machine:
- Set up a controller client machine (this can be an EC2 instance or your local machine)
- Install Terraform, kubectl, the AWS CLI, and aws-iam-authenticator
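On an Ubuntu controller machine, the tooling can be installed roughly as follows. This is a sketch based on the vendors' official install instructions, assuming a linux/amd64 host; the aws-iam-authenticator version is left as a placeholder, so check its releases page for a current one:

```shell
# Terraform via HashiCorp's apt repository
wget -O- https://apt.releases.hashicorp.com/gpg | \
  sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] \
  https://apt.releases.hashicorp.com $(lsb_release -cs) main" | \
  sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt-get update && sudo apt-get install -y terraform

# kubectl (latest stable release for linux/amd64)
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl

# AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip && sudo ./aws/install

# aws-iam-authenticator -- replace <version> with a release from its GitHub page
curl -Lo aws-iam-authenticator \
  "https://github.com/kubernetes-sigs/aws-iam-authenticator/releases/download/v<version>/aws-iam-authenticator_<version>_linux_amd64"
chmod +x aws-iam-authenticator && sudo mv aws-iam-authenticator /usr/local/bin/
```

Remember to also run `aws configure` (or set the usual `AWS_*` environment variables) so Terraform and the AWS CLI can authenticate to your account.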
Clone the GitHub repo into your controller client machine.
Feel free to edit things to your preference, such as the region, resource names, instance type for the worker nodes, and the Kubernetes version, among others.
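If you'd rather not edit the files directly, Terraform also lets you override variables on the command line. A sketch, assuming the repo defines variables with names like `region`, `cluster_name`, and `instance_type` (these names are illustrative, so match them to the repo's variables.tf):

```shell
# Hypothetical variable names -- check the repo's variables.tf for the real ones
terraform plan \
  -var="region=eu-west-1" \
  -var="cluster_name=my-eks-cluster" \
  -var="instance_type=t3.medium"
```

The same `-var` flags work with `terraform apply`, or you can put the overrides in a `terraform.tfvars` file, which Terraform loads automatically.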
Run `terraform init` to initialize the working directory.
Run `terraform plan` to see what resources would be created; 18 resources will be created in building this cluster.
Run `terraform apply` to create the resources. It takes about 8-10 minutes for the cluster to be built and ready to go.
Now that the cluster is built, you have to update your kubeconfig with the AWS CLI command:
`aws eks --region <region> update-kubeconfig --name <cluster-name>`
Test that everything works by running any `kubectl` command to show the nodes in your cluster:
`kubectl get nodes -o wide`