drevispas


Kubernetes on Bare Metal

Motivation

I moved to a division of my company that is quite different from my previous workplace. First of all, I can't use any cloud services due to strict security policies. I was building a REST API server with Flask and decided to deploy it in containers.

Change history

Dockerfile

At first, I wrote Dockerfiles for Flask, MongoDB, a Docker registry, Nginx, and Redis. That worked well for a small number of images like mine, and I was satisfied that everything was under my control. However, building and running several images one by one was a bit tedious.
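As a rough sketch, the Flask image looked something like this (the app module name, port, and gunicorn entry point here are placeholders, not my exact file):

```dockerfile
# Minimal Flask image sketch; "app:app" and port 5000 are assumptions
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 5000
# gunicorn must be listed in requirements.txt for this to work
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "app:app"]
```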

Docker Compose

My previous team used to deploy systems with Docker Compose, so I tried that and, after some investigation, was able to combine the related images in a Compose file. I was happy that all images were built and run from a single docker-compose.yaml file.
I could reuse the existing Dockerfiles to have Docker Compose build the images. docker-compose up -d started all the related containers in one shot, and docker-compose stop made stopping them just as simple. To remove the containers and networks, run docker-compose down; to delete the named volumes as well, run docker-compose down --volumes.
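Below is a minimal sketch of such a Compose file for the stack above; the service names, ports, and image tags are illustrative, not my exact configuration:

```yaml
version: "3.8"
services:
  api:
    build: ./flask-app        # reuses the existing Flask Dockerfile
    ports:
      - "5000:5000"
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo:5
    volumes:
      - mongo-data:/data/db   # persist database files across restarts
  redis:
    image: redis:6
  nginx:
    image: nginx:1.21
    ports:
      - "80:80"
    depends_on:
      - api
volumes:
  mongo-data:
```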
Building images and running/stopping containers were now covered. But how could I achieve high availability and scalability? Remember, I can't use any cloud services, so offerings like ELB and auto scaling were not an option. Kubernetes could help with that.

KinD

KinD saved me at that point. I had no servers to test Kubernetes on in my new environment. KinD stands for "Kubernetes in Docker", and it can deploy a multi-node Kubernetes cluster even on a single machine. At first, configuring KinD was not easy: it was still in beta, and its configuration format changed during my tests. Furthermore, I had to rethink the host layout because the nodes were actually containers. So I configured Docker port publishing to reach a Kubernetes NodePort from the host machine, as in the sketch below.
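A sketch of such a KinD configuration (the port numbers are examples):

```yaml
# kind-config.yaml: one control-plane and two workers; a NodePort
# inside the node container is published to the host machine
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # the Service's NodePort inside the cluster
        hostPort: 8080         # reachable as localhost:8080 on the host
        protocol: TCP
  - role: worker
  - role: worker
```

Create the cluster with kind create cluster --config kind-config.yaml.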
I was quite satisfied with KinD. I wrote ConfigMaps, Secrets, Services, Deployments, StatefulSets, and an Ingress (with the Nginx controller), and ran all of those resources successfully. I really liked Kustomize, with which I could patch the existing base YAMLs, combine all of them, and apply everything at once with kubectl apply -k ./. Still, I was worried that this setup was not real and would differ considerably from a production environment.
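A kustomization.yaml tying this together might look like the following (the file names are placeholders, and patchesStrategicMerge has since been folded into patches in newer Kustomize versions):

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - base/deployment.yaml
  - base/service.yaml
  - base/configmap.yaml
patchesStrategicMerge:
  - patches/replicas.yaml   # overrides e.g. the replica count of the base
```

Then a single kubectl apply -k ./ renders and applies everything.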

Multi-machine Kubernetes

As time passed, I was finally able to use several Linux machines and set out to configure a multi-node Kubernetes cluster across them. It turned out that KinD is virtually identical to Kubernetes on multiple machines; configuring Kubernetes on real hosts was actually faster than it had been for KinD. It is also more intuitive, because the control plane and worker nodes are simply hosts.
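For reference, with kubeadm, the de facto bootstrapping tool, the gist looks like this (the addresses and tokens below are placeholders):

```bash
# On the control-plane host; the CIDR matches Flannel's default
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# On each worker host, with the token printed by kubeadm init
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```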

Bare-metal specifics

Unfortunately, Kubernetes is not an all-in-one solution. By design, it leaves room for environment-specific implementations: Pod-to-Pod networking, the LoadBalancer Service type, Ingress, and persistent volumes all have to be provided by some means. With a cloud provider this would not be difficult; on bare metal, I had to configure plugin solutions for each of them, summarized in the table below.

| Required module | Solution | Comment |
| --- | --- | --- |
| Pod-to-Pod networking | Flannel | The default VXLAN backend works fine. |
| Local persistent volumes | Local Persistence Volume Provisioner | Good for performance, but it couples a Pod to a node. |
| Remote persistent volumes | NFS provisioning | An NFS server provides volumes dynamically (on demand). Intuitive, and it is easy to view the volume contents. |
| External load balancer | MetalLB | It implements the LoadBalancer Service type and assigns an external IP address to the Service. I chose a narrow range of IPs within my subnet (see the sketch below). |
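For example, MetalLB in layer-2 mode only needs to be told which IPs it may hand out. At the time it was configured with a ConfigMap like the one below (the address range is a placeholder; newer MetalLB releases use IPAddressPool and L2Advertisement custom resources instead):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.1.240-192.168.1.250   # a narrow slice of the subnet
```

With this in place, any LoadBalancer-type Service automatically gets one of these addresses as its external IP.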

Helm

Kustomize was convenient and easy to understand. Going beyond it, Helm is a package manager for Kubernetes that also gives us templating. Templating is a familiar concept if you have used Thymeleaf, Ansible, or Flask templates. On top of that, Helm lets us version our Kubernetes packages.
The package metadata lives in Chart.yaml, values for template variables in values.yaml, and the resource manifests under the templates/ directory; reusable code snippets go in templates/_helpers.tpl.
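As a small illustration (all names here are hypothetical), a value defined in values.yaml is substituted into a template at render time:

```yaml
# values.yaml
replicaCount: 2
image:
  repository: my-flask-app
  tag: "1.0"
```

```yaml
# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-api
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```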
At first, I was confused about how much should go into the values file versus the manifest templates. It probably depends on how each package is meant to be used.
Finally, I deployed my Flask app and MongoDB with Helm and found its management superior to Kustomize.
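The day-to-day commands are simple (the release and chart names are hypothetical):

```bash
helm install myapp ./mychart                          # first deployment
helm upgrade myapp ./mychart --set replicaCount=3     # change a value in place
helm rollback myapp 1                                 # versioned releases make rollbacks easy
```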
