Deploy a Kubernetes Cluster based on Calico and openSUSE Kubic

Introduction

openSUSE Kubic is a certified Kubernetes distribution based on openSUSE MicroOS. Calico is an open-source project that Kubernetes can use to deploy a pod network to the cluster. In this blog, I will show you how to deploy a Kubernetes cluster based on Calico and openSUSE Kubic using Virtual Machines. We are going to deploy a cluster with one master and one worker.

I originally intended to use Oracle VM VirtualBox. However, it turned out that on my machine, whenever I tried to run kubeadm on openSUSE Kubic in VirtualBox, it got stuck at watchdog: BUG: soft lockup - CPU#? stuck for xxs! with CPU usage around 100%. As a result, I switched to VMware Workstation Pro and the issue went away. I guess it's caused by some bug in VirtualBox.

Steps

Create the Virtual Machine and Install openSUSE Kubic

I won't explain how to do these things in detail here, but I will share some important points to note. Just refer to the official documentation if you are unsure or have any questions.

  • Here is my configuration for the Virtual Machine. I recommend that your host machine has more than 8GB of memory so that more than 3GB can be assigned to each Virtual Machine for it to run smoothly. In order for the Virtual Machines to connect to each other and to the Internet, you can set the Network Adapter to Bridged (Automatic).

  • During the openSUSE Kubic installation, remember to choose kubeadm Node when it comes to System Role, as the installer will deploy a Weave pod network instead of Calico if you choose the other role, which uses kubicctl.

  • I suggest installing openSUSE Kubic in just one Virtual Machine; after a successful installation, clone that Virtual Machine and use one copy as the master and the other as the worker. Remember to do a full clone.

Configuring the Master

When you boot into the master Virtual Machine, you can see its IP address in the notification shown on the console. In my case, it's 192.168.1.14. Take note of it.


For the convenience of copying and pasting commands, we can use SSH to log into the system. To configure that, first log into the system with the root account. Then execute vi /etc/ssh/sshd_config.d/10-enable-root-password.conf, type i to insert, and write the following into the file:

PasswordAuthentication yes
PermitRootLogin yes

This enables SSH root password login, which is not recommended in production. When you have finished editing, press [ESC], then type :wq to save and exit.
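As a minimal sketch of the remaining steps (using the IP address noted earlier; adjust it to yours), restart the SSH daemon so the new configuration takes effect, then connect from your host:

systemctl restart sshd
ssh root@192.168.1.14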


Kubeadm Init

Run kubeadm config images pull to pull the container images required for Kubernetes.

You can also specify --image-repository if, in your location, registry.opensuse.org is too slow to download from. In my case (in China), I'll use Aliyun to speed things up: kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
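If you want to see which images will be pulled before downloading anything, kubeadm can list them (append the same --image-repository flag here too if you use a mirror):

kubeadm config images list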

Then run kubeadm init --apiserver-advertise-address=<Your Master IP Address> --pod-network-cidr=192.168.0.0/16, replacing <Your Master IP Address> with the IP address you just noted. If you specified --image-repository in the last step, also append it to this command.

Wait for it to finish, and remember to take note of the worker node join command it prints at the end.
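As a rough sketch with the example IP address from this post, the init command and the join command that kubeadm prints look something like the following; the token and hash are placeholders, so copy your own join command verbatim:

kubeadm init --apiserver-advertise-address=192.168.1.14 --pod-network-cidr=192.168.0.0/16
# printed at the end of kubeadm init; yours will contain real values
kubeadm join 192.168.1.14:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>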

Execute export KUBECONFIG=/etc/kubernetes/admin.conf in your shell so that kubectl works.
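Note that the export only lasts for the current shell session. If you want kubectl to keep working after a reboot, one option (a sketch of the standard kubeadm setup, not strictly required here) is to copy the admin config into your home directory:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config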

Deploy Calico

Get the latest copy of the Calico configuration YAML file with curl -O https://docs.projectcalico.org/manifests/calico.yaml.

Change the path used to install the FlexVolume driver with sed -i 's#/usr/libexec/kubernetes/kubelet-plugins/volume/exec#/var/lib/kubelet/volume-plugin#g' calico.yaml, since on transactional (atomic) systems /usr/libexec/kubernetes is read-only.
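If you want to confirm the substitution worked before applying the manifest, a quick optional check is:

grep -n 'volume-plugin' calico.yaml
# the matching lines should now point to /var/lib/kubelet/volume-plugin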

Finally, apply the YAML file with kubectl apply -f calico.yaml.

Wait for all the pods to be available.

watch kubectl get pods --all-namespaces

If you have been waiting on a specific pod for too long, you can check that pod's Events to get error messages: kubectl describe pods -n kube-system <Pod Name>.

Configuring the Worker

Start the worker Virtual Machine, log in as root, and change the hostname, since it can't be the same as the master's: hostnamectl set-hostname 'worker'.

Finally, execute the worker node join command you noted earlier, ignoring the hostname could not be reached warnings since we didn't and don't need to configure DNS.

Then wait for the worker to become available. Done!
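To confirm, back on the master you can list the nodes; both the master and the worker should eventually report a Ready status:

kubectl get nodes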
