

Installing Kubernetes with Kubespray on CentOS 7 Guide


Preparing Ansible

1. Disable SELinux and firewalld

$ sudo -i
# setenforce 0
# sed -i "s/^SELINUX\=enforcing/SELINUX\=disabled/g" /etc/selinux/config
# systemctl disable firewalld; systemctl stop firewalld; systemctl mask firewalld

2. Install the packages required for Git and Ansible

# yum update
# yum install git
# yum install epel-release
# yum install python-pip
# pip install --upgrade pip

3. Clone the Kubespray repository and install its requirements

# git clone https://github.com/kubernetes-sigs/kubespray.git
# cd kubespray
###Install requirements
# pip install -r requirements.txt
###Copy ``inventory/sample`` as ``inventory/k8scluster``
# cp -rfp inventory/sample inventory/k8scluster
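Kubespray's requirements.txt pins the Ansible version it supports, so a quick version check confirms the install worked (the exact output depends on the Kubespray release):

# ansible --version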

4. Generate an SSH key and copy it to every VM that will join the K8s cluster

# ssh-keygen -t rsa
# ssh-copy-id -p 2324 admin@{ip of K8s node}
...
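Before continuing, it is worth a quick check that key-based login works for each node; if this still prompts for a password, Ansible will fail later (same admin user and port 2324 as above):

# ssh -p 2324 admin@{ip of K8s node} hostname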

5. Prepare the hosts inventory file

# cd $HOME/kubespray/inventory/k8scluster
# cp inventory.ini hosts.ini

hosts.ini

# cd $HOME/kubespray/inventory/k8scluster
# vi hosts.ini
---
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
k8sm901 ansible_host=10.233.247.64 ip=10.30.2.25
k8sm902 ansible_host=10.233.247.65 ip=10.30.2.26
k8sm903 ansible_host=10.233.247.66 ip=10.30.2.27
k8sw901 ansible_host=10.233.247.67 ip=10.30.2.28
k8sw902 ansible_host=10.233.247.68 ip=10.30.2.29
k8sw903 ansible_host=10.233.247.69 ip=10.30.2.30
k8sw904 ansible_host=10.233.247.61 ip=10.30.2.22
k8sw905 ansible_host=10.233.247.62 ip=10.30.2.23
k8sw906 ansible_host=10.233.247.63 ip=10.30.2.24

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
k8sm901
k8sm902
k8sm903

[etcd]
k8sm901
k8sm902
k8sm903

[kube-node]
k8sm901
k8sm902
k8sm903
k8sw901
k8sw902
k8sw903
k8sw904
k8sw905
k8sw906

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr

[all:vars]
ansible_ssh_user=admin
ansible_ssh_port=2324
---
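As a sanity check, ansible-inventory can print how Ansible actually resolved the groups and host variables in hosts.ini:

# cd $HOME/kubespray
# ansible-inventory -i inventory/k8scluster/hosts.ini --list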

6. Test Ansible connectivity with the ping module

# cd $HOME/kubespray
# ansible -i inventory/k8scluster/hosts.ini  -m ping all
k8sw901 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sm902 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sm903 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sm901 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sw903 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sw904 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sw905 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sw906 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
k8sw902 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
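The playbooks below run with --become, so it also helps to confirm that the admin user can escalate to root on every node (add -K if your sudo configuration prompts for a password):

# ansible -i inventory/k8scluster/hosts.ini -m command -a "whoami" all --become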

Preparing the K8s nodes

7. Remove any existing Docker/Kubernetes installs from the K8s nodes and install jq

$ sudo -i
# docker rm `docker ps -a -q`
# docker rmi `docker images -q`
# kubeadm reset 
# yum remove kubeadm kubectl kubelet kubernetes-cni kube* -y
# yum autoremove 
# rm -rf ~/.kube
###Install jq
# (yum install epel-release -y; yum install jq -y)

Set up the K8s cluster via Kubespray

8. Run the cluster playbook

$ sudo -i
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini cluster.yml --become

## Download the kubeconfig from one of the master nodes to the bastion VM
# mkdir -p ~/.kube
# ssh -p 2324 admin@k8sm901 'sudo cat /etc/kubernetes/admin.conf' >~/.kube/config

## Check that every node's status is Ready
# kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
k8sm901   Ready    control-plane,master   2d21h   v1.20.0
k8sm902   Ready    control-plane,master   2d21h   v1.20.0
k8sm903   Ready    control-plane,master   2d21h   v1.20.0
k8sw901   Ready    <none>                 2d21h   v1.20.0
k8sw902   Ready    <none>                 2d21h   v1.20.0
k8sw903   Ready    <none>                 2d21h   v1.20.0
k8sw904   Ready    <none>                 2d21h   v1.20.0
k8sw905   Ready    <none>                 2d21h   v1.20.0
k8sw906   Ready    <none>                 2d21h   v1.20.0
# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8sm901   Ready    control-plane,master   2d21h   v1.20.0   10.30.2.25    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sm902   Ready    control-plane,master   2d21h   v1.20.0   10.30.2.26    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sm903   Ready    control-plane,master   2d21h   v1.20.0   10.30.2.27    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sw901   Ready    <none>                 2d21h   v1.20.0   10.30.2.28    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sw902   Ready    <none>                 2d21h   v1.20.0   10.30.2.29    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sw903   Ready    <none>                 2d21h   v1.20.0   10.30.2.30    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sw904   Ready    <none>                 2d21h   v1.20.0   10.30.2.22    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sw905   Ready    <none>                 2d21h   v1.20.0   10.30.2.23    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
k8sw906   Ready    <none>                 2d21h   v1.20.0   10.30.2.24    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
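As an optional smoke test (not part of the Kubespray run itself), a throwaway nginx deployment verifies that scheduling and image pulls work end to end:

# kubectl create deployment nginx --image=nginx
# kubectl get pods -o wide
# kubectl delete deployment nginx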

FAQs

Remove a node from the cluster

# kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
k8sm901   Ready    control-plane,master   2d22h   v1.20.0
k8sm902   Ready    control-plane,master   2d22h   v1.20.0
k8sm903   Ready    control-plane,master   2d22h   v1.20.0
k8sw901   Ready    <none>                 2d22h   v1.20.0
k8sw902   Ready    <none>                 2d22h   v1.20.0
k8sw903   Ready    <none>                 2d22h   v1.20.0
k8sw904   Ready    <none>                 2d22h   v1.20.0
k8sw905   Ready    <none>                 2d22h   v1.20.0
k8sw906   Ready    <none>                 2d22h   v1.20.0
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini remove-node.yml -e "node=k8sw906" --become
$ kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
k8sm901   Ready    control-plane,master   2d23h   v1.20.0
k8sm902   Ready    control-plane,master   2d23h   v1.20.0
k8sm903   Ready    control-plane,master   2d23h   v1.20.0
k8sw901   Ready    <none>                 2d23h   v1.20.0
k8sw902   Ready    <none>                 2d23h   v1.20.0
k8sw903   Ready    <none>                 2d23h   v1.20.0
k8sw904   Ready    <none>                 2d23h   v1.20.0
k8sw905   Ready    <none>                 2d23h   v1.20.0
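Note that remove-node.yml drains and removes the node from the cluster but does not touch the inventory; per the Kubespray node-management docs, you should also delete the node's lines from hosts.ini (here, the k8sw906 entries under [all] and [kube-node]) so later playbook runs do not re-add it:

# vi $HOME/kubespray/inventory/k8scluster/hosts.ini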

Add a new node to the cluster

$ sudo -i
# vi $HOME/kubespray/inventory/k8scluster/hosts.ini
---
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
k8sm901 ansible_host=10.233.247.64 ip=10.30.2.25
k8sm902 ansible_host=10.233.247.65 ip=10.30.2.26
k8sm903 ansible_host=10.233.247.66 ip=10.30.2.27
k8sw901 ansible_host=10.233.247.67 ip=10.30.2.28
k8sw902 ansible_host=10.233.247.68 ip=10.30.2.29
k8sw903 ansible_host=10.233.247.69 ip=10.30.2.30
k8sw904 ansible_host=10.233.247.61 ip=10.30.2.22
k8sw905 ansible_host=10.233.247.62 ip=10.30.2.23
# ## new node
k8sw906 ansible_host=10.233.247.63 ip=10.30.2.24

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
k8sm901
k8sm902
k8sm903

[etcd]
k8sm901
k8sm902
k8sm903

[kube-node]
k8sm901
k8sm902
k8sm903
k8sw901
k8sw902
k8sw903
k8sw904
k8sw905
# ## new node
k8sw906

[calico-rr]

[k8s-cluster:children]
kube-master
kube-node
calico-rr

[all:vars]
ansible_ssh_user=admin
ansible_ssh_port=2324
---
$ kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
k8sm901   Ready    control-plane,master   2d23h   v1.20.0
k8sm902   Ready    control-plane,master   2d23h   v1.20.0
k8sm903   Ready    control-plane,master   2d23h   v1.20.0
k8sw901   Ready    <none>                 2d23h   v1.20.0
k8sw902   Ready    <none>                 2d23h   v1.20.0
k8sw903   Ready    <none>                 2d23h   v1.20.0
k8sw904   Ready    <none>                 2d23h   v1.20.0
k8sw905   Ready    <none>                 2d23h   v1.20.0
$ cd $HOME/kubespray
$ ansible-playbook -i inventory/k8scluster/hosts.ini scale.yml --become
$ kubectl get node
NAME      STATUS   ROLES                  AGE   VERSION
k8sm901   Ready    control-plane,master   47m   v1.20.0
k8sm902   Ready    control-plane,master   46m   v1.20.0
k8sm903   Ready    control-plane,master   46m   v1.20.0
k8sw901   Ready    <none>                 45m   v1.20.0
k8sw902   Ready    <none>                 45m   v1.20.0
k8sw903   Ready    <none>                 45m   v1.20.0
k8sw904   Ready    <none>                 45m   v1.20.0
k8sw905   Ready    <none>                 45m   v1.20.0
k8sw906   Ready    <none>                 66s   v1.20.0
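The Kubespray docs also allow limiting scale.yml to just the new node, so the existing nodes are not touched at all:

$ ansible-playbook -i inventory/k8scluster/hosts.ini scale.yml --become --limit=k8sw906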

Reset Cluster

# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini reset.yml --become
# kubectl get node
The connection to the server 10.30.2.25:6443 was refused - did you specify the right host or port?
#
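After a reset, the kubeconfig on the bastion points at a cluster that no longer exists; removing it avoids confusing errors on the next attempt (housekeeping, not part of the original steps):

# rm -f ~/.kube/config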

Upgrade the Kubernetes version on all nodes in the cluster

# kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
k8sm901   Ready    control-plane,master   4m53s   v1.20.0
k8sm902   Ready    control-plane,master   4m22s   v1.20.0
k8sm903   Ready    control-plane,master   4m12s   v1.20.0
k8sw901   Ready    <none>                 3m12s   v1.20.0
k8sw902   Ready    <none>                 3m12s   v1.20.0
k8sw903   Ready    <none>                 3m12s   v1.20.0
k8sw904   Ready    <none>                 3m12s   v1.20.0
k8sw905   Ready    <none>                 3m12s   v1.20.0
k8sw906   Ready    <none>                 3m4s    v1.20.0
## Edit kube_version
# vi $HOME/kubespray/inventory/k8scluster/group_vars/k8s-cluster/k8s-cluster.yml
...
kube_version: v1.20.0   #edit to v1.20.2
...
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini upgrade-cluster.yml --become
# watch -x kubectl get node,pod -o wide
NAME           STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
node/k8sm901   Ready    control-plane,master   100m   v1.20.2   10.30.2.25    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
node/k8sm902   Ready    control-plane,master   100m   v1.20.2   10.30.2.26    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   docker://19.3.14
...
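Alternatively, the Kubespray upgrade docs let you pass the target version on the command line instead of editing group_vars:

# ansible-playbook -i inventory/k8scluster/hosts.ini upgrade-cluster.yml --become -e kube_version=v1.20.2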

Renew certificate

Client certificates generated by kubeadm expire after one year, so they have to be regenerated before then.

# vi $HOME/kubespray/inventory/k8scluster/group_vars/k8s-cluster/k8s-cluster.yml
...
force_certificate_regeneration: true
...
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini cluster.yml --become
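To see when the current certificates actually expire, kubeadm can report it directly; run this on one of the master nodes (kubeadm certs check-expiration exists as of v1.20; older releases used kubeadm alpha certs check-expiration):

# ssh -p 2324 admin@k8sm901 'sudo kubeadm certs check-expiration'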

Change container runtime

Kubernetes deprecated Docker as a container runtime in v1.20, and Docker runtime support is currently planned for removal in the 1.22 release in late 2021 (almost a year away!).

Example: changing from Docker to containerd

## Edit k8s-cluster.yml
# cd $HOME/kubespray/inventory/k8scluster/group_vars/k8s-cluster/
# cp k8s-cluster.yml k8s-cluster.yml.bk
# vi k8s-cluster.yml
...
container_manager: docker #Change from docker to containerd
...
# cd $HOME/kubespray/inventory/k8scluster/group_vars
# cp etcd.yml etcd.yml.bk
# vi etcd.yml
...
etcd_deployment_type: docker #Change from docker to host
...
# cd $HOME/kubespray/inventory/k8scluster/group_vars/all
# cp containerd.yml containerd.yml.bk
## Uncomment the containerd config
# vi containerd.yml
...
---
# Please see roles/container-engine/containerd/defaults/main.yml for more configuration options

# Example: define registry mirror for docker hub

containerd_config:
  grpc:
    max_recv_message_size: 16777216
    max_send_message_size: 16777216
  debug:
    level: ""
  registries:
    "docker.io":
      - "https://mirror.gcr.io"
      - "https://registry-1.docker.io"
  max_container_log_line_size: -1
#   metrics:
#     address: ""
#     grpc_histogram: false
---
## Apply the new Container runtime
# cd $HOME/kubespray
# ansible-playbook -i inventory/k8scluster/hosts.ini cluster.yml --become
## Check container runtime
# kubectl get node -o wide
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8sm901   Ready    control-plane,master   11h   v1.20.2   10.30.2.25    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sm902   Ready    control-plane,master   11h   v1.20.2   10.30.2.26    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sm903   Ready    control-plane,master   11h   v1.20.2   10.30.2.27    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw901   Ready    <none>                 11h   v1.20.2   10.30.2.28    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw902   Ready    <none>                 11h   v1.20.2   10.30.2.29    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw903   Ready    <none>                 11h   v1.20.2   10.30.2.30    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw904   Ready    <none>                 11h   v1.20.2   10.30.2.22    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw905   Ready    <none>                 11h   v1.20.2   10.30.2.23    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
k8sw906   Ready    <none>                 11h   v1.20.2   10.30.2.24    <none>        CentOS Linux 7 (Core)   3.10.0-1160.11.1.el7.x86_64   containerd://1.4.3
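With Docker gone from the nodes, docker ps no longer shows workloads; crictl, which Kubespray deploys alongside containerd, talks to the new runtime instead (assuming crictl is on the node's PATH):

# ssh -p 2324 admin@k8sw901 'sudo crictl ps'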

Reference: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/containerd.md
