Author: Vrukshali Torawane is pursuing a bachelor's degree in Computer Science and Engineering and is an intern at the Data on Kubernetes Community. She holds certifications in Automation with Ansible and as a Specialist in Containers and Kubernetes, and she is a cloud and DevOps enthusiast.
CREATE A MULTI-CLOUD SETUP OF A K8S CLUSTER:
- Launch node in AWS
- Launch node in Azure
- Launch node in GCP
- One node on the cloud should be Master Node
- Then set up a multi-node Kubernetes cluster.
To do this task, I have created the following nodes: a master node on AWS, and worker (slave) nodes on AWS, Azure, and GCP.
Let's start:
First: Setting up Kubernetes master on AWS:
Step-1: To install kubelet, kubeadm, and kubectl, we first need to set up a yum repo:
vim /etc/yum.repos.d/k8s.repo
# content inside repo k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
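As an optional check, you can confirm that yum sees the new repo before installing anything:
yum repolist enabled | grep -i kubernetes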
Step-2: Installing the required software:
yum install docker kubelet kubeadm kubectl iproute-tc -y
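Once the install finishes, a quick version check confirms the binaries are available on the PATH (optional):
kubeadm version
kubectl version --client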
Step-3: Starting and enabling services:
systemctl enable --now docker
systemctl enable --now kubelet
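As a sanity check, verify that both services are up. Note that kubelet may keep restarting until kubeadm init runs; that is expected at this stage:
systemctl is-active docker kubelet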
Step-4: We also need to pre-pull the container images kubeadm uses for the control-plane components (kube-apiserver, etcd, and so on):
kubeadm config images pull
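If you only want to see which images kubeadm would pull, you can list them first:
kubeadm config images list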
Step-5: Now, we need to change the Docker cgroup driver to systemd so it matches the kubelet:
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Step-6: Since we have made changes to Docker, we need to restart the Docker service:
systemctl restart docker
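To confirm Docker picked up the new driver after the restart:
docker info | grep -i "cgroup driver"
The output should report systemd as the cgroup driver.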
Step-7: Allow bridged traffic to pass through iptables by setting the flag to 1:
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
Step-8: The important step: while initializing the master, we need to associate the cluster endpoint with the public IP of the instance, so that nodes in the other clouds can connect. For this, pass:
--control-plane-endpoint=<PUBLIC_IP>:6443
kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<public_ip>:6443 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem
Step-9: Now, make a directory for the kubeconfig file, copy the admin config into it, and give it the right ownership:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
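kubectl should now reach the API server. The master will report NotReady until the network plugin is applied in the next step:
kubectl get nodes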
Step-10: Apply Flannel as the pod network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
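You can watch the kube-system pods until the Flannel pod is Running, at which point the master should turn Ready:
kubectl get pods -n kube-system -w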
Step-11: Final step: generate a join token so that the worker nodes can connect to the master node:
kubeadm token create --print-join-command
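The printed join command has the following shape (the token and hash below are placeholders, not real values):
kubeadm join <PUBLIC_IP>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>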
Second: Setting up Kubernetes worker nodes on AWS, Azure, and GCP:
(Note: Follow the same steps on all three platforms.)
👉🏻 AWS
👉🏻 Azure
👉🏻 GCP
Step-1: To install kubelet, kubeadm, and kubectl, we first need to set up a yum repo:
vim /etc/yum.repos.d/k8s.repo
# content inside repo k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Step-2: Installing the required software:
yum install docker kubelet kubeadm kubectl iproute-tc -y
Step-3: Starting and enabling services:
systemctl enable --now docker
systemctl enable --now kubelet
Step-4: We also need to pre-pull the container images using kubeadm:
kubeadm config images pull
Step-5: Now, we need to change the Docker cgroup driver to systemd:
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Step-6: Since we have made changes to Docker, we need to restart the Docker service:
systemctl restart docker
Step-7: Allow bridged traffic to pass through iptables by setting the flag to 1:
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
Step-8: Copy-paste the join command generated on the master node.
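Before joining, it is worth checking that the master's API port is reachable from each worker; port 6443 on the master's public IP must be open in its cloud firewall/security group, or the join will time out. A quick check (assuming curl is installed; -k skips TLS certificate verification):
curl -k https://<PUBLIC_IP>:6443/version
Then run the join command printed earlier on each worker node.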
Finally, on the master node:
kubectl get nodes
You will see that all the nodes are connected and Ready:
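If a node joins but commands like kubectl logs later time out (see the comment below), check which IPs the nodes registered with; by default each kubelet advertises its cloud-internal IP, which is not routable across clouds:
kubectl get nodes -o wide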
Join us: register for Kubernetes Community Days Chennai 2022 at kcdchennai.in
Top comments (1)
I have a master node in DigitalOcean and a worker node in Google Cloud. The nodes are connected and the pods are in Running status, but when I run kubectl logs I get a timeout error (Error from server: Get "10.190.0.3:10250/containerLogs/def... dial tcp 10.190.0.3:10250: i/o timeout). I also noticed the log request uses the node's local IP (10.190.0.3), which is not reachable from the master. Can anyone help with this?