Kubernetes Playground in ACloudGuru
With the rise of cloud computing, up-skilling in its ecosystem has become almost a necessity, and the best way to skill up is hands-on practice with the major cloud providers: AWS, Azure, and GCP. But you always run the risk of forgetting about the servers or cloud resources you spun up for learning, and then a shocking bill lands in your inbox (of course, you can control costs with tools like AWS Budgets and cap the bill amount). But what if you had the liberty to spin up resources in a major cloud provider, or set up your own dedicated server for a couple of weeks to learn, and then destroy everything?
That's exactly what comes bundled with ACloudGuru membership.
- Cloud Sandbox (provided for AWS, Azure and GCP)
- Cloud Servers
I used these features extensively during my preparation for AWS and Kubernetes Certifications.
While preparing for the Kubernetes CKAD and CKA exams, you need to create clusters quite often to perform various tasks, such as:
- Creating Deployment manifest files and deploying them.
- Creating various Kubernetes objects and editing them repeatedly to understand their behavior.
- Upgrading the Kubernetes version of a LIVE cluster and backing up etcd.
To practice all of this during preparation, you will want a dedicated cluster for a couple of weeks or more, and that's exactly what I did: I created a Kubernetes cluster with the right version for the exam and continued my practice there.
In this blog I am going to discuss how you could leverage Cloud Servers as your Kubernetes Playground. Let's get started -
Create Cloud Servers:
Navigate to -
Playground → Cloud Servers → Click "New Server"
Create 3 such Cloud Servers so you can build a cluster with 1 Master + 2 Worker Nodes.
Wait for all 3 servers to reach the 'Ready' state before connecting to them. Please use "Tags" to keep track of which one is your Master Node. Once all the servers are UP, you get this view -
Expand each server to see its details and login credentials. On your first login, it will prompt you to change your password.
While installing Docker, I ran into the error below.
E: Could not get lock /var/lib/dpkg/lock - open (11: Resource temporarily unavailable)
E: Unable to lock the administration directory (/var/lib/dpkg/), is another process using it?
You get this error when another process (often Ubuntu's automatic updates) is updating the system and is holding the lock on the dpkg files (dpkg is the Debian package manager).
Either wait for it to finish, or follow "Method 2" mentioned in this site to release the locks.
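If you'd rather see which process is actually holding the lock before doing anything drastic, a quick check (assuming `lsof` is available on the server, as it is on standard Ubuntu images) looks like this:

```shell
# Identify the process currently holding the dpkg lock -- on a freshly
# booted Ubuntu server this is often unattended-upgrades finishing its run.
sudo lsof /var/lib/dpkg/lock

# Alternatively, list any running apt/dpkg processes.
# The [a]pt pattern prevents grep from matching its own process line.
ps aux | grep -E '[a]pt|[d]pkg'
```

Once the listed process exits, the lock is released and apt works again.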
Install Kubernetes with Deployment Tools
We can bootstrap clusters using tools such as kubeadm, kops, or Kubespray.
We will use kubeadm to set up our cluster. [Ref]
Letting iptables see bridged traffic
**EXECUTE THE COMMANDS BELOW ON ALL MACHINES as the root user ("sudo -i")**
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
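To confirm the settings actually took effect, you can check that the kernel module is loaded and that both sysctl values report 1:

```shell
# br_netfilter should appear in the loaded-module list
lsmod | grep br_netfilter

# Both values should print "= 1"
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```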
Install Container Runtime (Docker) [Ref]
**EXECUTE THE COMMANDS BELOW ON ALL MACHINES as the root user ("sudo -i")**
# (Install Docker CE)
# Set up the repository:
# Install packages to allow apt to use a repository over HTTPS
sudo apt-get update && sudo apt-get install -y \
apt-transport-https ca-certificates curl software-properties-common gnupg2
# Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# Add the Docker apt repository:
sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
# Install Docker CE
sudo apt-get update && sudo apt-get install -y \
containerd.io=1.2.13-2 \
docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) \
docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)
# Set up the Docker daemon
cat <<EOF | sudo tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
sudo mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
sudo systemctl daemon-reload
sudo systemctl restart docker
If you want the Docker service to start on boot, run the following command:
sudo systemctl enable docker
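Before moving on, it's worth verifying that Docker is running and picked up the systemd cgroup driver we configured above (kubeadm's preflight checks will warn if the kubelet and Docker end up using different cgroup drivers):

```shell
sudo systemctl is-active docker                 # expect: active
sudo docker info --format '{{.CgroupDriver}}'   # expect: systemd
```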
Install kubeadm, kubelet and kubectl [Ref]
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
- kubectl: the command line utility to talk to your cluster.
**EXECUTE THE COMMANDS BELOW ON ALL MACHINES as the root user ("sudo -i")**
***Please note that the commands below will always install the latest packages.***
sudo apt-get update && sudo apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
To install a specific version of kubeadm, kubelet, and kubectl instead (note that the Debian packages carry a "-00" revision suffix):
sudo apt-get install -y kubelet=1.17.0-00 kubeadm=1.17.0-00 kubectl=1.17.0-00
Restart kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
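As a quick sanity check, confirm that all three tools installed at the version you expect (flags as they existed in the kubectl releases of that era):

```shell
kubeadm version -o short
kubelet --version
kubectl version --client --short
```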
Create Cluster with kubeadm tool
Initialize the Control Plane Node [Ref]
**EXECUTE THE COMMAND BELOW ON THE MASTER as the root user ("sudo -i")**
kubeadm init
# this command takes a few minutes to bootstrap the cluster and install the
# control plane components on the master node.
The command above provides 2 important pieces of information:
- Details of the cluster api-server to add to your kubeconfig as a regular (non-root) user.
- The command to execute on the worker nodes (as the root user) to allow them to join the cluster.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
**EXECUTE THE COMMANDS BELOW ON THE MASTER as a non-root user**
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
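At this point kubectl should work from the master, although the node will report NotReady until a pod network add-on is installed in the next step:

```shell
kubectl get nodes                # master shows STATUS "NotReady" for now
kubectl get pods -n kube-system  # coredns pods stay Pending until the CNI is up
```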
Install POD networking solution [Ref]
In our case we will use the Weave Net CNI plugin from Weaveworks.
**EXECUTE THE COMMAND BELOW ON THE MASTER as a non-root user**
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
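You can watch the Weave Net pods come up and confirm that the master flips to Ready (the `name=weave-net` label is the one Weave's DaemonSet applies to its pods):

```shell
kubectl get pods -n kube-system -l name=weave-net   # wait for STATUS "Running"
kubectl get nodes                                   # master should now be Ready
```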
Join Worker Nodes to Cluster [Ref]
As part of "kubeadm init" you will have received a "kubeadm join ..." command. Now we need to execute it on the worker nodes to complete the cluster creation process.
**EXECUTE THE COMMAND BELOW ON THE WORKER NODES ONLY as the root user ("sudo -i")**
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
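If you've lost the join command that "kubeadm init" printed, there's no need to re-initialize the cluster; the master can mint a fresh token and print a complete join command:

```shell
# Run on the master node
kubeadm token create --print-join-command
```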
Wait a few minutes for the nodes to join the cluster. You can then execute the command below (from the master node) to see your nodes joining the cluster.
kubectl get nodes
Deploy sample application to test your cluster deployment
kubectl run nginx --image=nginx
# output should be - pod/nginx created
# Get deployed pod details
kubectl get pods -o wide
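To go one step further and confirm that networking works end to end, you can expose the pod as a ClusterIP service and curl it from the master node. The service name nginx-svc below is just an illustrative choice:

```shell
# Expose the nginx pod on port 80 behind a ClusterIP service
kubectl expose pod nginx --port=80 --name=nginx-svc

# Fetch the service's ClusterIP and curl the default nginx welcome page
CLUSTER_IP=$(kubectl get svc nginx-svc -o jsonpath='{.spec.clusterIP}')
curl -s "http://$CLUSTER_IP" | grep -i 'welcome to nginx'
```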
This completes the process to bootstrap your own Kubernetes Cluster in the Playground. 😎