Michael Chenetz
Up and running with K0s and Portainer

Prerequisites:

  1. A physical server
  2. Four Ubuntu Linux hosts
  3. A network connecting them

Intro:

In this tutorial I will explain how to get started with K0s and Portainer. K0s is a Kubernetes distribution that makes installing Kubernetes simple and repeatable, so you can stand up clusters the same way every time. Portainer is a management platform that runs on, and can interact with, most container platforms, allowing you to manage your apps and microservices.

In this tutorial we will be using Portainer BE, which is free for up to 5 nodes. The BE edition has some great functionality around RBAC, Edge, and cloud provisioning of Kubernetes clusters.

This intro does not cover the installation of Ubuntu onto a server and assumes that has already been done.

Installing K0s:

K0s diagram

The first thing to know is that K0s uses a distributed installation model: you run the k0sctl tool from one machine, it connects to your hosts over SSH, and it provisions everything for you.

To start, install the k0sctl binary on the system you want to orchestrate from.

On Mac:

brew install k0sproject/tap/k0sctl

On Windows:

choco install k0sctl

On Linux, install k0sctl by downloading the binary from the k0sctl GitHub releases page and placing it on your PATH. Then, to enable Bash completion:

k0sctl completion > /etc/bash_completion.d/k0sctl

To enable Zsh completion:

k0sctl completion > /usr/local/share/zsh/site-functions/_k0sctl

Generating and deploying an SSH key:

On the host that you are orchestrating from, create the key:

ssh-keygen -t ed25519

Hit Enter through all of the prompts to accept the defaults, and the key will be created.

To deploy the key, run the following for each Ubuntu host. The user should be whatever user you created on the Ubuntu host, and the IP should be that host's address:

ssh-copy-id -i ~/.ssh/id_ed25519 user@host

Configuring K0s:

K0s needs a YAML file that holds the configuration of your cluster, and k0sctl will generate a starting point for you. To create it, run the following command in a new directory:

k0sctl init > k0sctl.yaml

Once you have the file, edit it with the editor of your choice (I tend to choose VS Code). Below is an example of a four-node cluster with one controller and three workers. Each host has an IP address, a username, the path of the SSH key (on the orchestrating machine) used to authenticate to the host, and the role of that host. The nice part is that once this is configured, you can run it over and over.

apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 192.168.10.1
      user: user
      port: 22
      keyPath: /Users/user/.ssh/id_ed25519
    role: controller
  - ssh:
      address: 192.168.10.2
      user: user
      port: 22
      keyPath: /Users/user/.ssh/id_ed25519
    role: worker
  - ssh:
      address: 192.168.10.3
      user: user
      port: 22
      keyPath: /Users/user/.ssh/id_ed25519
    role: worker
  - ssh:
      address: 192.168.10.4
      user: user
      port: 22
      keyPath: /Users/user/.ssh/id_ed25519
    role: worker
  k0s:
    version: 1.24.2+k0s.0
    dynamicConfig: false

Once you have everything configured, run the following command to bring the cluster up:

Mac/Linux:

k0sctl apply --config ./k0sctl.yaml

Windows:

k0sctl apply --config k0sctl.yaml

You will get a bunch of output from that command that looks like the following:

⠀⣿⣿⡇⠀⠀⢀⣴⣾⣿⠟⠁⢸⣿⣿⣿⣿⣿⣿⣿⡿⠛⠁⠀⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀█████████ █████████ ███
⠀⣿⣿⡇⣠⣶⣿⡿⠋⠀⠀⠀⢸⣿⡇⠀⠀⠀⣠⠀⠀⢀⣠⡆⢸⣿⣿⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀███          ███    ███
⠀⣿⣿⣿⣿⣟⠋⠀⠀⠀⠀⠀⢸⣿⡇⠀⢰⣾⣿⠀⠀⣿⣿⡇⢸⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠀███          ███    ███
⠀⣿⣿⡏⠻⣿⣷⣤⡀⠀⠀⠀⠸⠛⠁⠀⠸⠋⠁⠀⠀⣿⣿⡇⠈⠉⠉⠉⠉⠉⠉⠉⠉⢹⣿⣿⠀███          ███    ███
⠀⣿⣿⡇⠀⠀⠙⢿⣿⣦⣀⠀⠀⠀⣠⣶⣶⣶⣶⣶⣶⣿⣿⡇⢰⣶⣶⣶⣶⣶⣶⣶⣶⣾⣿⣿⠀█████████    ███    ██████████

INFO k0sctl 0.0.0 Copyright 2021, Mirantis Inc.
INFO Anonymized telemetry will be sent to Mirantis.
INFO By continuing to use k0sctl you agree to these terms:
INFO https://k0sproject.io/licenses/eula
INFO ==> Running phase: Connect to hosts
INFO [ssh] 10.0.0.1:22: connected
INFO [ssh] 10.0.0.2:22: connected
INFO ==> Running phase: Detect host operating systems
INFO [ssh] 10.0.0.1:22: is running Ubuntu 20.10
INFO [ssh] 10.0.0.2:22: is running Ubuntu 20.10
INFO ==> Running phase: Prepare hosts
INFO [ssh] 10.0.0.1:22: installing kubectl
INFO ==> Running phase: Gather host facts
INFO [ssh] 10.0.0.1:22: discovered 10.12.18.133 as private address
INFO ==> Running phase: Validate hosts
INFO ==> Running phase: Gather k0s facts
INFO ==> Running phase: Download K0s on the hosts
INFO [ssh] 10.0.0.2:22: downloading k0s 0.11.0
INFO [ssh] 10.0.0.1:22: downloading k0s 0.11.0
INFO ==> Running phase: Configure K0s
WARN [ssh] 10.0.0.1:22: generating default configuration
INFO [ssh] 10.0.0.1:22: validating configuration
INFO [ssh] 10.0.0.1:22: configuration was changed
INFO ==> Running phase: Initialize K0s Cluster
INFO [ssh] 10.0.0.1:22: installing k0s controller
INFO [ssh] 10.0.0.1:22: waiting for the k0s service to start
INFO [ssh] 10.0.0.1:22: waiting for kubernetes api to respond
INFO ==> Running phase: Install workers
INFO [ssh] 10.0.0.1:22: generating token
INFO [ssh] 10.0.0.2:22: writing join token
INFO [ssh] 10.0.0.2:22: installing k0s worker
INFO [ssh] 10.0.0.2:22: starting service
INFO [ssh] 10.0.0.2:22: waiting for node to become ready
INFO ==> Running phase: Disconnect from hosts
INFO ==> Finished in 2m2s
INFO k0s cluster version 0.11.0 is now installed
INFO Tip: To access the cluster you can now fetch the admin kubeconfig using:
INFO      k0sctl kubeconfig

The final output lets you know to run the command k0sctl kubeconfig, which will generate the kubeconfig file needed to run kubectl (Kubernetes) commands against the cluster.

To get the kubeconfig file run:

k0sctl kubeconfig > kubeconfig

To test the cluster run:

kubectl get pods --kubeconfig kubeconfig -A

You should see:

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5f6546844f-w8x27   1/1     Running   0          3m50s
kube-system   calico-node-vd7lx                          1/1     Running   0          3m44s
kube-system   coredns-5c98d7d4d8-tmrwv                   1/1     Running   0          4m10s
kube-system   konnectivity-agent-d9xv2                   1/1     Running   0          3m31s
kube-system   kube-proxy-xp9r9                           1/1     Running   0          4m4s
kube-system   metrics-server-6fbcd86f7b-5frtn            1/1     Running   0          3m51s

K0s Issue:
There is an issue in some releases of K0s where certain directories were not in the locations Kubernetes expects. Before you bang your head against the wall, here is the fix. SSH to each of your worker nodes and run the following:

sudo ln -s /var/lib/k0s/kubelet /var/lib/

This gives you your base Kubernetes install, but you still need a couple of other things. The first is a load balancer; I typically use MetalLB. To install MetalLB, run the following:

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml

To configure MetalLB, edit the following with your network info. The address range should correspond to free addresses on your subnet that you want MetalLB to allocate when exposing containers externally through the load balancer. Place the contents into your editor and save it as "metallb.yaml":

metallb.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250

To apply the configuration run:

kubectl apply -f ./metallb.yaml
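To check that MetalLB is handing out addresses, you can expose any workload through a Service of type LoadBalancer. This is a hypothetical example (the Service name and selector are placeholders you would match to one of your own deployments); after applying it, kubectl get svc should show an EXTERNAL-IP from the pool configured above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: test-lb          # hypothetical name
spec:
  type: LoadBalancer     # MetalLB assigns an IP from the configured address pool
  selector:
    app: my-app          # placeholder; match your deployment's labels
  ports:
  - port: 80             # port exposed on the load balancer IP
    targetPort: 8080     # port your container listens on
```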

The last thing I like to add is an ingress controller; I typically use NGINX for this. An ingress controller matches hostnames and URL paths to Kubernetes services, which saves you from spending an IP address on everything. You can create a single load balancer and map DNS names to its IP; the ingress controller inspects the requested hostname on each incoming request, and if there is a match, it follows the rules you defined to route the request to the corresponding service.
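As a sketch of what those rules look like, here is a minimal Ingress manifest. The hostname and backend service name are placeholders, assuming you already have a Service called my-service listening on port 80:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress           # hypothetical name
spec:
  ingressClassName: nginx
  rules:
  - host: app.example.com         # DNS name pointing at the load balancer IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service      # placeholder backend Service
            port:
              number: 80
```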

To add ingress do the following:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.3/deploy/static/provider/baremetal/deploy.yaml

You can have multiple ingress classes in your cluster, so it's good to define a default. To do that, run:

kubectl -n ingress-nginx annotate ingressclasses nginx ingressclass.kubernetes.io/is-default-class="true"

Lastly, you need to change the NGINX controller's Service from its default NodePort type to LoadBalancer, so that MetalLB can assign it an external IP. Edit the Service and change spec.type to LoadBalancer:

kubectl edit service ingress-nginx-controller -n ingress-nginx
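If you prefer a non-interactive alternative to kubectl edit, the same change can be made with a one-line patch (this assumes the default Service name from the manifest above):

```shell
kubectl patch service ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"type":"LoadBalancer"}}'
```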

Storage

This is a little outside the scope of the install but I will add it here because it could be useful to some.

A storage class is needed in Kubernetes in order to have data persistence. There are many ways to accomplish this: some like to run storage inside the Kubernetes cluster itself using something like longhorn.io or Ceph. I have an NFS share that I set up on TrueNAS that I use for this purpose. The following instructions will set up a CSI (Container Storage Interface) driver for NFS:

Install NFS on the cluster:

helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts
helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.1.0

Apply the storage class configuration. The only things that really need to be changed are the IP address of the NFS server and the share path on that server:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-csi
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.10.10
  share: /mnt/default/code/
  # csi.storage.k8s.io/provisioner-secret is only needed for providing mountOptions in DeleteVolume
  # csi.storage.k8s.io/provisioner-secret-name: "mount-options"
  # csi.storage.k8s.io/provisioner-secret-namespace: "default"
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nconnect=8  # only supported on linux kernel version >= 5.3
  - nfsvers=4.1

Lastly, we want to make that storage class default for the cluster:

kubectl patch storageclass nfs-csi -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
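To confirm the storage class can provision volumes, you can create a small test PVC (the name and size here are arbitrary placeholders) and check that it reaches the Bound state with kubectl get pvc:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc        # hypothetical name
spec:
  accessModes:
    - ReadWriteMany         # NFS supports shared read-write access
  storageClassName: nfs-csi # the storage class defined above
  resources:
    requests:
      storage: 1Gi          # arbitrary test size
```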

Notes:

One of the really nice features of K0s is automation. I haven't done it myself yet, but you can automate the extra steps above using the manifest deployer and the Helm deployer. Information is available below:

Manifest Deployer:
https://docs.k0sproject.io/v1.24.3+k0s.0/manifests/

Helm Deployer:
https://docs.k0sproject.io/v1.24.3+k0s.0/helm-charts/
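As a sketch of what the Helm deployer looks like, charts can be declared under spec.k0s.config in k0sctl.yaml so they are installed along with the cluster. This is an untested outline based on the k0s docs; the chart version here is a placeholder you should check against the MetalLB chart repository:

```yaml
  k0s:
    version: 1.24.2+k0s.0
    config:
      spec:
        extensions:
          helm:
            repositories:
            - name: metallb
              url: https://metallb.github.io/metallb
            charts:
            - name: metallb
              chartname: metallb/metallb
              version: "0.13.4"        # placeholder; pin to a real chart version
              namespace: metallb-system
```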

Portainer Install:

Requirements:

  1. Kubernetes cluster
  2. Data persistent storage

Install:

helm repo add portainer https://portainer.github.io/k8s/
helm repo update

Installing via the load balancer is the easiest option:

helm install --create-namespace -n portainer portainer portainer/portainer \
    --set service.type=LoadBalancer \
    --set enterpriseEdition.enabled=true \
    --set tls.force=true

If you want to get a little more advanced, have set up ingress and DNS, and know the ingress load balancer IP, then run the following. Be aware that there are a number of prerequisites to get this working; see the Kubernetes Ingress documentation (https://kubernetes.io/docs/concepts/services-networking/ingress/):

helm install --create-namespace -n portainer portainer portainer/portainer \
    --set enterpriseEdition.enabled=true \
    --set service.type=ClusterIP \
    --set tls.force=true \
    --set ingress.enabled=true \
    --set ingress.ingressClassName=<ingressClassName (eg: nginx)> \
    --set ingress.annotations."nginx\.ingress\.kubernetes\.io/backend-protocol"=HTTPS \
    --set ingress.hosts[0].host=<fqdn (eg: portainer.example.io)> \
    --set ingress.hosts[0].paths[0].path="/"

Logging in:

Load Balancer installed:

https://<loadbalancer IP>:9443/ or http://<loadbalancer IP>:9000/

Ingress installed:

https://<FQDN>/

Creating the first user:

Add your free license key:
If you have installed Portainer Business Edition, you will now be asked to provide your license key, which you received when signing up for Business Edition or the free trial. If you don't have a license key, you can click the "Don't have a license?" link or get in touch with the Portainer team.
Paste the license key you were provided into the box and click Submit.


Import Kubernetes cluster:

Select the Kubernetes option and click Start Wizard. Then select the Import option.

Enter a name for cluster then click Select a file to browse for your kubeconfig file.


When you're ready, click the Connect button. If you have other environments to configure click Next to proceed, otherwise click Close to return to the list of environments where you will see the progress of your provision.

From here you can follow the docs to get everything else setup:
Portainer Docs

Check out more things you can do with Portainer at (https://www.portainer.io/blog)

Get your free 5-node license at (https://www.portainer.io/pricing/take5)
