
Rahul Kiran Gaddam

Installing K8s on ARM64 [4 CPU, 24 GB RAM]

Philosophy

  • Kubernetes (K8s) has solved some of the biggest problems of infrastructure.
  • Unfortunately, working with it requires a lot of infrastructure of its own [static IP, hardware, domain name].
  • There are plenty of alternatives that help us explore it, like Play with Kubernetes and Katacoda, but there is always something missing [persistence, availability].
  • In this article we will explore how to create a K8s single-node cluster and explore K8s. This document is inspired by the article Medium K8 Installation.

Overview

OCI

  • Oracle is revolutionizing cloud for industries. Oracle is the only SaaS company in the market that provides all cloud offerings [IaaS, PaaS, SaaS].
  • Most cloud providers give only minimal free kits to explore.
  • Oracle has crossed this barrier by providing free Compute, Network, Load Balancer, and Autonomous Database offerings for everyone under its Always Free Resources strategy.

Installation

  • Using the OCI free tier, we will create a K8s single-node cluster with 24 GB RAM and 4 OCPUs.
  • For this installation I used the configuration below. I tried to create two nodes, but I was not able to solve the networking between them.

    • Instance Name: K8-Master
    • Image: Oracle Linux Cloud Developer 8
    • Processor: Ampere ARM 64-bit processor
  • This creates a VM with a public IP. We have to be careful when selecting containers/deliverables to run on this VM (see the quick check just before the installation steps).

    • In general, deliverables are listed as linux-amd64 and darwin-amd64; we need to pick the ones labeled linux-arm64.
  • Once the VM is provisioned, it is suggested to associate it with a domain, as that simplifies access to the K8s cluster.

    • There are a lot of free domain providers; I have used No-IP.
  • Below are the steps we followed to install K8s.
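  • Before running them, a quick architecture check confirms which deliverables to pick (uname is available on Oracle Linux out of the box):

  # "aarch64" confirms that linux-arm64 deliverables are the right choice
  uname -m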

  # Login to Root
  sudo su

  # Updating Host File - Add entry
  ## Get CIDR Private IP
  ifconfig 

  vi /etc/hosts
  <private.ip> k8-master <domain.name>

  # Firewall Configuration
  systemctl disable firewalld
  yum install iptables-services -y
  systemctl start iptables
  systemctl enable iptables
  iptables -F
  iptables -P INPUT ACCEPT
  iptables -P OUTPUT ACCEPT
  service iptables save
  systemctl restart iptables
  iptables -L -n
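
  ## Note: the commands above open the host firewall, but on OCI the VCN security
  ## list / NSG must also allow inbound traffic on the ports used later
  ## (e.g. 80, 443, 6443); that is configured in the OCI console, not on the VM.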

  # Docker Installation
  ## Podman is provided by default, and K8s can run on Podman
  ## I was unable to get the installation working with Podman and had to move to Docker

  # -- Remove Podman
  yum remove podman buildah  -y

  # -- Install Docker
  sudo yum install -y yum-utils
  sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  yum install -y docker-ce

  # -- Configure Docker
  systemctl  stop docker
  /usr/sbin/usermod -a -G docker opc
  /usr/sbin/sysctl net.ipv4.conf.all.forwarding=1
  systemctl  start docker
  chmod 777 /var/run/docker.sock
  swapoff -a
  sed -i '/ swap / s/^/#/' /etc/fstab
  vi /etc/docker/daemon.json
  {
      "exec-opts": ["native.cgroupdriver=systemd"]
  }
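
  ## Note: native.cgroupdriver=systemd aligns Docker with the systemd cgroup driver
  ## that kubeadm recommends for the kubelet; mismatched drivers can make the node
  ## unstable. Docker is restarted further below, after the K8s packages are installed.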

  # Install K8 Software

  # -- Pre configurations
  cat <<EOF |  tee /etc/modules-load.d/k8s.conf
  br_netfilter
  EOF

  cat <<EOF |  tee /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF

  sysctl --system

  cat <<EOF |  tee /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
  enabled=1
  gpgcheck=1
  repo_gpgcheck=1
  gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
  exclude=kubelet kubeadm kubectl
  EOF

  setenforce 0
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

  # -- Download
  yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
  systemctl enable --now kubelet

  # -- Validate
  kubectl version --short
  kubeadm version -o short

  # -- Creating OS Services
  systemctl enable docker.service
  systemctl enable kubelet.service
  systemctl daemon-reload
  systemctl restart docker
  systemctl restart kubelet

  # -- Installing K8 Single Node Cluster
  CERTKEY=$(kubeadm certs certificate-key)
  kubeadm init --apiserver-cert-extra-sans=<domain.name>,<public.ip>,<private.ip> --pod-network-cidr=10.32.0.0/12   --control-plane-endpoint=<domain.name> --upload-certs --certificate-key=$CERTKEY
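
  ## Note: --upload-certs/--certificate-key only matter if additional control-plane
  ## nodes are joined later; a join command can be regenerated at any time with:
  ## kubeadm token create --print-join-command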

  # -- Moving k8 config file  
  mkdir -p $HOME/.kube
  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  chown $(id -u):$(id -g) $HOME/.kube/config
  mkdir -p /home/opc/.kube
  cp $HOME/.kube/config /home/opc/.kube/config
  chmod 777 /home/opc/.kube/config

  # -- Validating Installation
  netstat -nplt
  kubectl get nodes
  kubectl get pods -n kube-system

  # -- Enabling Flannel Networking
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
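  • Note on the pod network CIDR: the kube-flannel.yml manifest ships with "Network": "10.244.0.0/16" in its net-conf.json ConfigMap, while kubeadm init above uses --pod-network-cidr=10.32.0.0/12; the two should agree. A minimal sketch (assuming the manifest still carries that default) that aligns the manifest with the kubeadm pod CIDR before applying it:

  # -- Download the Flannel manifest, align its Network with the kubeadm pod CIDR, and apply
  curl -sLo kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  sed -i 's#10.244.0.0/16#10.32.0.0/12#' kube-flannel.yml
  kubectl apply -f kube-flannel.yml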

Ingress

  • With the K8s environment successfully installed, we want to run pods and access them using an associated DNS name.
  • An Ingress controller helps to do this. We will associate an Ingress with two pods.
# Untaint Master
## Removing the NoSchedule taint allows pods to be scheduled on the Master
kubectl get nodes -o json | jq '.items[].spec.taints'
kubectl taint nodes k8-master node-role.kubernetes.io/master:NoSchedule- 
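
# -- Verify the taint was removed; Taints should now show <none>
kubectl describe node k8-master | grep -i taints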

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
mv /usr/local/bin/helm /usr/bin

# -- Validating Helm Installation
helm version

# -- Add Helm Repo
helm repo add stable https://charts.helm.sh/stable
helm repo list

# Install Nginx Ingress Controller

# -- Add the Helm chart repo, as the default stable repo is deprecated
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm repo list

# -- Download default chart
helm show values ingress-nginx/ingress-nginx > ngingress-metal-custom.yaml
chmod 777 ngingress-metal-custom.yaml

# -- Update the below settings in ngingress-metal-custom.yaml to run NGINX Ingress on OCI
hostNetwork: true ## change to false

hostPort:
  enabled: false ## change to true

kind: Deployment ## change to DaemonSet

externalIPs:
- public.ip ## replace with your instance's Public IP

loadBalancerSourceRanges:
- public.ip/32 ## replace with your instance's Public IP
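
## Note: in the ingress-nginx values file these keys live under the controller: section;
## externalIPs and loadBalancerSourceRanges sit under controller.service.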

# -- Run Chart
kubectl create ns ingress-nginx
helm install helm-ngingress ingress-nginx/ingress-nginx -n ingress-nginx --values ngingress-metal-custom.yaml

# -- Verification
kubectl get all -n ingress-nginx
helm list -n ingress-nginx
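  • Before wiring up an Ingress, the controller itself can be smoke-tested from outside; with hostPort enabled it should answer on port 80 with the default backend's 404 (assuming the OCI security list allows inbound 80/443):

# -- Expect HTTP 404 from the ingress-nginx default backend (no Ingress rules yet)
curl -s -o /dev/null -w "%{http_code}\n" http://<public.ip>/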
  • Connecting Service to an Ingress
# -- This will create Deployment, ClusterIP Service, Ingress
kubectl apply -f https://raw.githubusercontent.com/rahgadda/Kubernetes/master/MyDev/helloworld-ingress.yaml

# -- Verify Ingress 
kubectl get ing
  • On accessing http://<public.ip> or http://<domain.name>, the system will display Hello, World!
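  • The helloworld-ingress.yaml above is applied straight from the repo; for reference, an Ingress of roughly that shape looks like the sketch below (names are illustrative: it assumes a ClusterIP Service called helloworld on port 80 and the chart's default nginx IngressClass).

# -- Illustrative Ingress manifest (not the exact file from the repo)
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: helloworld   # hypothetical Service name
            port:
              number: 80
EOF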

Dashboard

  • The K8s team has created the Kubernetes Dashboard to view insights into the cluster.
  • Typically it is accessed using kube proxy or a NodePort. We will deploy it and access it using an Ingress.
# -- Install Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

# -- Verify Dashboard 
kubectl get svc -n kubernetes-dashboard
kubectl get pods -n kubernetes-dashboard

# -- Create Service Account to Access Dashboard
kubectl create serviceaccount rahgadda -n default
kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:rahgadda
kubectl create clusterrolebinding user-cluster-admin-binding --clusterrole=cluster-admin --user=default

# -- Create Config file to Login
server=https://<domain.name>:6443
name=$(kubectl get serviceaccount rahgadda -n default -o jsonpath="{.secrets[0].name}")
ca=$(kubectl get secret/$name -o jsonpath='{.data.ca\.crt}')
token=$(kubectl get secret/$name -o jsonpath='{.data.token}' | base64 --decode)
namespace=$(kubectl get secret/$name -o jsonpath='{.data.namespace}' | base64 --decode)

echo "
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${ca}
    server: ${server}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: default
    user: default-user
current-context: default-context
users:
- name: default-user
  user:
    token: ${token}
" > rahgadda-kubeconfig.yaml

# -- Use rahgadda-kubeconfig.yaml file to login to Dashboard

# -- Create Ingress for Dashboard Service
kubectl apply -f https://raw.githubusercontent.com/rahgadda/Kubernetes/master/MyDev/k8-dashboard-ingress.yaml

# -- Dashboard will be available at URL https://<domain.name>/dashboard/
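  • To confirm the generated kubeconfig works before logging in to the Dashboard, point kubectl at it. Note that from Kubernetes 1.24 onwards ServiceAccount token Secrets are no longer created automatically, so on newer clusters the token would need to be created explicitly (e.g. with kubectl create token).

# -- Sanity check of the generated kubeconfig
kubectl --kubeconfig rahgadda-kubeconfig.yaml get pods -n kubernetes-dashboard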


Top comments (1)

Rahul Kiran Gaddam

From 1.20 onwards Docker support is deprecated. This will not cause any failure in the above installation, but pod communication will not work. To support it, follow stackoverflow.com/questions/720483...