
David J Eddy

Intro: k3s, a less needy Kubernetes

What is all this?

The official documentation states "...k3s is intended to be a fully compliant production-grade Kubernetes distribution with [the following] changes...". But what does that mean in layman's terms? It means Kubernetes has a lot of functionality that is not strictly required for IoT, Edge computing, or lower-powered hardware. For example, kubeadm has fairly high hardware requirements and runs very slowly on low-power / high-latency hardware. Taking this further, running k8s worker nodes on an ARM chip is an exercise in frustration. Enter k3s: lower-powered hardware is no longer a hard barrier to entry.

Up and Running

Installation is very straightforward; I would say almost as easy as apt-get install. But since k3s is aimed at Edge and IoT, Debian is rarely the chosen OS, so the installer is delivered via curl instead.

curl -sfL https://get.k3s.io | sh -

Execute the above in a terminal; the output should look similar to the following.

[INFO]  Finding latest release
[INFO]  Using v0.7.0 as release
[INFO]  Downloading hash https://github.com/rancher/k3s/releases/download/v0.7.0/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/rancher/k3s/releases/download/v0.7.0/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping /usr/local/bin/kubectl symlink to k3s, command exists in PATH at /usr/bin/kubectl
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Skipping /usr/local/bin/ctr symlink to k3s, command exists in PATH at /usr/bin/ctr
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Running ps aux | grep k3s confirms that k3s is indeed running.
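
For a slightly more thorough sanity check, something like the following works (output will vary by machine); the systemd unit and the bundled kubectl both come from the install script above.

# Is the process there?
ps aux | grep [k]3s
# Is the systemd unit the installer created healthy?
sudo systemctl status k3s
# Does the bundled kubectl see the node?
sudo k3s kubectl get nodes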


Subsequent startups of the master node are accomplished via an even shorter command.

echo 'Run the k3s server...'
sudo k3s server &
# Kubeconfig is written to /etc/rancher/k3s/k3s.yaml
sudo k3s kubectl get node

If the Rancher team was going for the shortest commands possible, they win.
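
As an aside, if you would rather use a standalone kubectl rather than the bundled one, you can point it at the kubeconfig noted in the comment above. A minimal sketch, assuming kubectl is already installed and noting that the generated file is root-owned:

# Copy the generated kubeconfig somewhere your user can read it.
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config
# A standalone kubectl will now talk to the k3s cluster.
kubectl get nodes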

Now on to worker nodes. Joining a worker node to a cluster is nearly as easy.

echo 'Run a k3s worker node...'
# On a worker node, run the following.
# NODE_TOKEN is on the server @ /var/lib/rancher/k3s/server/node-token
sudo k3s agent --server https://master_node:6443 --token ${NODE_TOKEN}
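
For reference, grabbing that token on the master and confirming the join could look like this (master_node is a placeholder hostname, as above):

echo 'On the master node...'
# Read the join token referenced in the comment above.
sudo cat /var/lib/rancher/k3s/server/node-token
# Once the agent command has run on the worker, it should show up here.
sudo k3s kubectl get nodes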

That is it. Three commands to get a master and worker nodes installed, running, and joined together. Who said Kubernetes was confusing? (I am joking; k8s can be very confusing.)

An Example Use Case

As I wrote this post, a colleague of mine reached out asking how I would put together a system that could monitor the status of, and receive data from, a wide range of in-situ IoT devices for agriculture. The system needs to be able to tell which devices are online and how healthy they are, as well as receive data streams from them. Once the data is ingested and analyzed, alerts and reports would be generated. The in-situ devices would be low-powered and very isolated, likely running off solar with a local battery pack. During the proof of concept phase, Raspberry Pi 3's with custom enclosures would be the deployed IoT devices. Bam! Perfect case for k3s. When a device starts, its startup script would instruct it to join the cluster via a master node running on an ARM server (probably an AWS EKS master, really). Then machine and sensor data could be fed into Kinesis Firehose for analysis. Bam, done.
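
To make that concrete, a rough sketch of the boot-time join script baked into each Raspberry Pi image might look like the following. MASTER_NODE and the token file location are placeholders for whatever the fleet provisioning puts in place, not a tested deployment.

#!/bin/sh
# Hypothetical startup script for an in-situ device.
# MASTER_NODE and /boot/node-token are placeholders supplied at provisioning time.
MASTER_NODE="master.example.internal"
NODE_TOKEN="$(cat /boot/node-token)"

# Join the cluster as an agent, using the same command shown above.
sudo k3s agent --server "https://${MASTER_NODE}:6443" --token "${NODE_TOKEN}" &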


Wrap Up

I can hear you asking, 'So soon?' Well...yes. k3s really is that easy. One command to install, one command to start, one command to join. I would even dare to say it is easier to operate than Docker.

What do you think? Does Kubernetes have a place with Edge and IoT devices, or is it the kind of overkill technology engineers so often tend toward? Let me know in the comments below.

Additional Reading

Top comments (5)

VladoPortos

I have a Kubernetes cluster made out of RPi4s and everything is fine, I just can't find information about where the pods are stored. What I mean is, nobody ever talks about the storage (not the storage that is mounted into the pods), but let's say your SD card is 2GB; the pods that run on the node can't be bigger than 2GB, logically, right? So where do you specify that the Kubernetes nodes/k3s should use, for example, a USB disk for the temporary FS for pods?

Henry Quinn

Functionally, the same API as K8s?

David J Eddy • Edited

"...fully compliant Kubernetes distribution..." - github.com/rancher/k3s

I take that to mean it meets the requirements set forth by the K8S team to be a drop-in replacement. Like how MariaDB is a MySQL-compliant replacement.

K3S does not have all the extra / fancy / additional parts that are not 100% required to run a cluster.

Henry Quinn

Roger, that's what I figured. Just setting up an RPi Kubernetes cluster next week and didn't know if I should be dumping time into K8s or K3s. Might have to try both!

David J Eddy

I'd be interested in your experience with k3s if you go that route.