

Create a Multi-Cloud Setup of a Kubernetes Cluster

Author: Vrukshali Torawane is pursuing a bachelor's degree in Computer Science and Engineering. She is an intern at the Data on Kubernetes Community and holds certifications in Automation with Ansible and as a Specialist in Containers and Kubernetes. She is a cloud and DevOps enthusiast.

Create a multi-cloud setup of a K8s cluster:

  1. Launch a node in AWS
  2. Launch a node in Azure
  3. Launch a node in GCP
  4. One node in the cloud should be the master node
  5. Then set up a multi-node Kubernetes cluster.

To do this task, I created the following nodes:
the master node on AWS, and slave nodes on AWS, Azure, and GCP.

Let’s start:

First: Setting up the Kubernetes master on AWS:


Step-1: To install kubelet, kubeadm, and kubectl, we first need to set up a yum repository:

vim /etc/yum.repos.d/k8s.repo
# content inside repo k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


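If you want to double-check that the new repo is being picked up (an optional check, not part of the original steps), you can list the enabled repositories:

yum repolist enabled | grep -i kubernetes
# should list the [kubernetes] repo defined above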

Step-2: Installing required software:

yum install docker kubelet kubeadm kubectl iproute-tc -y

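As a quick sanity check (optional, not in the original write-up), confirm that the tools were installed and are on the PATH:

kubeadm version          # prints the kubeadm build info
kubectl version --client # client-only version, since no cluster exists yet
docker --version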

Step-3: Starting and enabling services:

systemctl enable --now docker
systemctl enable --now kubelet


Step-4: We also need to pull the Docker images that kubeadm uses for the control-plane components:

kubeadm config images pull


Step-5: Now, we need to change the Docker cgroup driver to systemd:

vim /etc/docker/daemon.json
{  
"exec-opts": ["native.cgroupdriver=systemd"]
} 


Step-6: Since we have changed the Docker configuration, we need to restart the Docker service:

systemctl restart docker

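To confirm that the change took effect (an optional check), ask Docker which cgroup driver it is now using:

docker info | grep -i cgroup
# should now report: Cgroup Driver: systemd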

Step-7: Setting bridge-nf-call-iptables to 1, so that traffic crossing the bridge is passed through iptables:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptable

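Note that on some images the br_netfilter kernel module is not loaded by default, and the value written above does not survive a reboot. A minimal sketch for making it persistent, assuming the standard sysctl.d layout:

modprobe br_netfilter                                                   # load the bridge netfilter module if it is missing
echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf  # persist the setting across reboots
sysctl --system                                                         # reload sysctl settings from all configuration files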

Step-8: The important step: while initializing the master, we need to bind the control-plane endpoint to the public IP of the instance, so that nodes running in the other clouds can reach it. For this, pass the following flag to kubeadm init:

--control-plane-endpoint=<PUBLIC_IP>:6443
kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<public_ip>:6443 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

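For example, if the master's public IP were 3.110.25.7 (a made-up address, only for illustration), the command would look like this; also make sure the cloud security group/firewall allows inbound traffic on port 6443 so the other clouds can reach the API server:

# 3.110.25.7 is a hypothetical public IP used only for illustration
kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=3.110.25.7:6443 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem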

Step-9: Now, create a directory for the kubeconfig file and set the right ownership on it:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
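
kubectl should now be able to reach the cluster as this user; a couple of optional checks (not in the original write-up):

kubectl cluster-info   # prints the API server and CoreDNS endpoints
kubectl get nodes      # the master usually shows NotReady until the network add-on from the next step is applied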

Step-10: Apply Flannel as the pod network add-on:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
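
Before joining the workers, you can optionally wait for the network and DNS pods to come up:

kubectl get pods --all-namespaces
# the flannel and coredns pods should reach the Running state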

Step-11: Final step: generate a join token and command so that the slave nodes can connect to the master node:

kubeadm token create --print-join-command

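The printed command has roughly the following shape; the endpoint, token, and CA cert hash below are placeholders, so use the exact command printed on your master:

# placeholders only -- copy the exact command printed by the command above
kubeadm join <public_ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>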

Second: Setting up the Kubernetes slave nodes on AWS, Azure, and GCP:
(Note: follow the same steps on all three platforms.)

👉🏻 AWS


👉🏻 Azure


👉🏻 GCP


Step-1: To install kubelet, kubeadm, and kubectl, we first need to set up a yum repository:

vim /etc/yum.repos.d/k8s.repo
# content inside repo k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg


Step-2: Installing required software:

yum install docker kubelet kubeadm kubectl iproute-tc -y


Step-3: Starting and enabling services:

systemctl enable --now docker
systemctl enable --now kubelet


Step-4: We also need to pull the Docker images using kubeadm:

kubeadm config images pull


Step-5: Now, we need to change the Docker cgroup driver to systemd:

vim /etc/docker/daemon.json
{  
"exec-opts": ["native.cgroupdriver=systemd"]
}


Step-6: Since we have changed the Docker configuration, we need to restart the Docker service:

systemctl restart docker


Step-7: Setting bridge-nf-call-iptables to 1, so that traffic crossing the bridge is passed through iptables:

echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptable


Step-8: Copy and paste the join command generated on the master node (in Step-11) and run it on each slave node.

Finally, on the master node:

kubectl get nodes

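Optionally, the wide output also shows which internal IP and OS each node registered with, which is handy when the nodes live in different clouds:

kubectl get nodes -o wide
# adds INTERNAL-IP, OS-IMAGE and CONTAINER-RUNTIME columns to the output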

You will see that all the nodes are connected and in the Ready state:


Join us

Register for Kubernetes Community Days Chennai 2022 at kcdchennai.in

Top comments (1)

thomas550i

I have a master node in DigitalOcean and a worker node in Google Cloud. The nodes are connected and the pods are in Running status, but when I run kubectl logs I get a timeout error (Error from server: Get "10.190.0.3:10250/containerLogs/def... dial tcp 10.190.0.3:10250: i/o timeout). I also noticed the log request is made to the local IP (10.190.0.3), so there is no public host in it. Can anyone help with this?