As part of an effort to bring parity between environments, my team recently switched to using Kubernetes for development as well as production orchestration with the help of Skaffold.
While this reduces maintenance overhead and is a great idea on the whole, it also means that the application started outgrowing my laptop rather quickly. Luckily, we have an enterprise virtual machine provisioning environment where I decided to offload my development environment.
Now, it would've been easier to set up a single-node cluster with a large enough VM, but how basic would that have been? So, I went with a multi-node cluster. If you'd like to do the same, read on!
Prerequisites
- A set of Ubuntu virtual machines. I used four machines with 16 GB RAM and 4 CPU cores each. Pick one of these instances as the primary node; the rest will be considered secondary. Let's assume that the primary node has the IP address `10.0.0.1`.
- A shared disk between the nodes so persistent volumes can be shared across them. For the following, we'll assume there's a reasonably sized disk mounted at `/data` on all nodes.
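For the shared disk, one common option is an NFS export mounted on every node. The sketch below assumes a hypothetical NFS server at `10.0.0.100` exporting `/export/k8s-data`; substitute your own server, export path, and storage technology:

```shell
# Mount a hypothetical NFS export at /data on each node.
# 10.0.0.100 and /export/k8s-data are placeholders for your environment.
sudo apt-get install -y nfs-common
sudo mkdir -p /data
sudo mount -t nfs 10.0.0.100:/export/k8s-data /data

# Persist the mount across reboots.
echo "10.0.0.100:/export/k8s-data /data nfs defaults 0 0" | sudo tee -a /etc/fstab
```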
Ports to expose
The primary node should have the following ports exposed:
- `25000` (so secondary nodes can join the cluster)
- `32000` (so the container registry can be accessed by secondary nodes)
- `16443` (if you want to use `kubectl` from a remote machine)
- Others as needed
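If your VMs use `ufw`, exposing these ports on the primary node might look something like the following sketch; the `10.0.0.0/24` subnet is an assumption about where your secondary nodes live, so adapt it (and the firewall tool) to your environment:

```shell
# Allow cluster-join and registry traffic from the node subnet (assumed 10.0.0.0/24).
sudo ufw allow from 10.0.0.0/24 to any port 25000 proto tcp
sudo ufw allow from 10.0.0.0/24 to any port 32000 proto tcp

# Allow remote kubectl access to the Kubernetes API server.
sudo ufw allow 16443/tcp
```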
Software to install
Follow standard steps to install the following services/applications:

- Docker (used later to build images and push them to the cluster's registry)
My VMs started off with Ubuntu Server 16.04, which I then upgraded to 18.04. The upgrade isn't strictly necessary, but it's worth noting in case it changes some of the steps below.
Install MicroK8s
Let's start by installing MicroK8s on all nodes:
```shell
sudo snap install microk8s --classic --channel=1.18/stable

# check status
sudo microk8s status --wait-ready
```
No need to enable any addons for now as we'll be doing this in later steps.
Warning: You might run into an issue with NFS and snap not playing nicely. I was only able to run MicroK8s commands as root.
kubectl
MicroK8s comes with its own namespaced `kubectl` that can be invoked with `microk8s kubectl`. If you're used to working directly with `kubectl`, this might start to get tedious. There are a few ways to get around this:

1. Connect your existing `kubectl` to your MicroK8s instance:

   ```shell
   sudo microk8s kubectl config view --raw > $HOME/.kube/config
   ```

   The same file can also be used to access the cluster from a remote machine, as long as the appropriate ports are accessible. See the documentation for more information. I should point out that this is the only option I have tested, but the next two might also work.

2. Use a good old bash alias:

   ```shell
   alias kubectl='microk8s kubectl'
   ```

3. Use a snap alias:

   ```shell
   sudo snap alias microk8s.kubectl kubectl
   ```
Install and configure MicroK8s addons
First, enable some basic MicroK8s addons that we're going to need:
```shell
sudo microk8s enable dns ingress storage
```
If you need Helm support, be sure to add `helm` to the list above.
Configure storage location
By default, the storage addon persists all volumes in `/var/snap/microk8s/common/default-storage`. Since we're going to be sharing storage across nodes, we need to update this path so volumes are written to our `/data`-mounted disk instead. You can do this by editing the `hostpath-provisioner` deployment:
```shell
sudo microk8s kubectl get -o yaml -n kube-system deploy hostpath-provisioner | \
  sed 's~/var/snap/microk8s/common/default-storage~/data/snap/microk8s/common/default-storage~g' | \
  sudo microk8s kubectl apply -f -

# restart microk8s for good measure
sudo microk8s stop && sudo microk8s start
```
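If you want to verify the change took effect, one quick sanity check is to grep the deployment for the new path:

```shell
# The output should reference /data/snap/microk8s/common/default-storage
# rather than the old /var/snap/... location.
sudo microk8s kubectl get -o yaml -n kube-system deploy hostpath-provisioner | grep default-storage
```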
Enable internal registry
When you have a multi-node cluster, the easiest way to share development images is to push them to a private registry. Skaffold knows how to push and pull from private registries, when needed, but we'll need to set one up. Luckily, MicroK8s makes it relatively easy:
```shell
sudo microk8s enable registry
```

This will start a registry on port `32000` that can be accessed by other nodes in the cluster via `10.0.0.1:32000`.
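Once Docker is configured to trust this registry (covered in the next section), you can smoke-test it by pushing any image; `my-image` here is just a placeholder name:

```shell
# Build and push a test image from a directory containing a Dockerfile.
docker build -t 10.0.0.1:32000/my-image:dev .
docker push 10.0.0.1:32000/my-image:dev

# List repositories via the registry's HTTP API to confirm the push landed.
curl http://10.0.0.1:32000/v2/_catalog
```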
Working with an insecure registry
Without additional configuration, the registry started in the step above is insecure. If you're not comfortable with that, you could look into securing it. For the purposes of this tutorial, we will continue to use it as is, which still requires some – though less involved – changes.
First, we need to make sure that Docker won't have any trouble pushing to this registry just because it's insecure. This can be done by editing the Docker daemon configuration at `/etc/docker/daemon.json` to add the following lines:

```json
{
  "insecure-registries": ["10.0.0.1:32000"]
}
```
Keep in mind that this file might need to be created if it's not already present. Once this is taken care of, restart the daemon for the changes to take effect:

```shell
sudo systemctl restart docker
```
Second, MicroK8s needs to be persuaded not to complain when pulling from this insecure registry. For this, find the file `/var/snap/microk8s/current/args/containerd-template.toml` and, under `[plugins] -> [plugins.cri.registry] -> [plugins.cri.registry.mirrors]`, add:

```toml
[plugins.cri.registry.mirrors."10.0.0.1:32000"]
endpoint = ["http://10.0.0.1:32000"]
```
Restart MicroK8s:

```shell
sudo microk8s stop && sudo microk8s start
```
This needs to be done on all the nodes. See official instructions for more details.
Form the cluster
Finally, we're ready to form a cluster. Run the following on the primary node:

```shell
sudo microk8s add-node
```
You'll get back something similar to:

```
Join node with: microk8s join 10.0.0.1:25000/<some-token>
```

Copy the `microk8s join ...` command and run it on one of the secondary nodes. Note that a new token needs to be generated for each secondary node you wish to add to the cluster.
Now you can run `kubectl get nodes` on the primary node and see that all nodes have joined. That's it: you now have your own fully functioning Kubernetes cluster!
Use Skaffold for building and deployment
This part of the tutorial assumes some knowledge of Skaffold. If you aren't familiar with it, it's a very useful tool and I'd highly recommend checking it out. In my case, we're using Skaffold to simplify building Docker images and deploying our Helm charts during development.
With the above setup in place, we need to make some minor adjustments to our `skaffold.yaml` files to make them work with the multi-node cluster and private registry. I'll also assume that your code has already been cloned to a location on your primary node.
First, list your private registry under `build.insecureRegistries`:

```yaml
build:
  insecureRegistries: ["10.0.0.1:32000"]
```
Second, prefix the image names for all your artifacts under `build` with your registry address, like so:

```yaml
build:
  artifacts:
    - image: 10.0.0.1:32000/my-image
```
The same goes for wherever you use the image; this will vary depending on your own setup. As an example, if you're using Helm, you might end up with something similar to:

```yaml
deploy:
  helm:
    releases:
      - values:
          imageName: 10.0.0.1:32000/my-image
```
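Putting the pieces together, a minimal `skaffold.yaml` might look something like the following; the `apiVersion`, release name, and chart path are placeholders that will differ in your setup:

```yaml
apiVersion: skaffold/v2beta5
kind: Config
build:
  insecureRegistries: ["10.0.0.1:32000"]
  artifacts:
    - image: 10.0.0.1:32000/my-image
deploy:
  helm:
    releases:
      - name: my-app              # placeholder release name
        chartPath: charts/my-app  # placeholder chart path
        values:
          imageName: 10.0.0.1:32000/my-image
```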
Now you can build and deploy with `skaffold dev` or `skaffold run` as usual. Development will never be the same again!
Troubleshooting
Along the way, I encountered some issues that might be peculiar to my setup, but are worth mentioning.
Getting `Forbidden: disallowed by cluster policy` error
I ran into this issue when trying to install the Elasticsearch chart. The workaround for this was to do the following on the primary node:
```shell
echo "--allow-privileged" | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver

# restart microk8s for changes to take effect
sudo microk8s stop && sudo microk8s start
```
Unable to connect to the internet from pods
By default, the internal DNS server points to Google's DNS servers. If, for whatever reason, this doesn't work for you, you would need to update the CoreDNS configuration to allow your pods to access the internet.
```shell
kubectl edit -n kube-system configmaps coredns
# edit the configmap by replacing the line that starts with "forward" under
# data.Corefile with "forward . /etc/resolv.conf"
# if this doesn't work, try manually replacing Google's 8.8.8.8 and 8.8.4.4
# servers with your own DNS servers
# save and close; the configuration should get reloaded automatically
```
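After the edit, the forward section of the `Corefile` would look roughly like this (an abbreviated sketch; the other plugins in the default MicroK8s Corefile stay unchanged):

```
.:53 {
    errors
    health
    # ... other default plugins unchanged ...
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```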
Next steps
Here are some improvements to this setup you could explore that I haven't covered here:

- If VS Code is your IDE of choice, remote SSH development makes interacting with your remote code a breeze!
- Configure the Helm and kubectl clients on your local machine to connect to your remote cluster by exporting the kubeconfig file described earlier. This would allow you to interact with the cluster without leaving the comforts of your local machine.
- You might even try offloading your builds to the remote Docker host instead of building locally.
Hope you have fun with it!