Aditi Bindal for NodeShift

How to Deploy Kubernetes on Bare Metal

Kubernetes, often called "K8s", is an open-source platform that manages containerized applications, such as Docker containers, as a cluster. It automates much of the heavy lifting, such as container deployment, scaling up and down, and load balancing, with built-in features. It also makes applications easier to manage and deployable anywhere. With the increasing adoption of containers by organizations, Kubernetes has become the de facto standard in DevOps for operating efficiently with containerized apps.

In this article, we'll walk you through the process of deploying Kubernetes on bare metal servers. We'll discuss choosing between bare metal and VM-based servers for this use case and then guide you through a step-by-step approach to deploying Kubernetes on a server. Please note that the deployment process is more or less the same on either type of machine (bare metal or VM), so feel free to use this guide for both.

Why bare metal Kubernetes?

Bare metal Kubernetes eliminates the virtualization layer, allowing containers to run directly on physical servers. This approach maximizes resource efficiency by giving applications full access to hardware resources, such as CPU, memory, and storage, without the overhead of virtual machines. It can deliver significant performance improvements, including noticeably lower network latency, making it ideal for resource-intensive workloads like AI and big data. Additionally, bare metal Kubernetes lowers costs by avoiding virtualization licensing fees and provides complete control over infrastructure configuration, ensuring better security, isolation, and tailored management for critical business applications.

hero-image

Prerequisites

For a production (multi-node) cluster, you'll need:

  • Two or more Bare metal servers (such as the ones provided by NodeShift), with each node having at least:

    • CPUs - 4 cores
    • RAM - 8 GB
    • Storage - 50 GB SSD
  • Ubuntu 22.04

Note: These prerequisites vary widely across use cases; a large-scale deployment may call for a much higher-end configuration.

Step-by-step process to deploy Kubernetes on bare metal

For this tutorial, we'll use two CPU-powered bare metal servers from NodeShift, which provides high-compute machines at an affordable cost while meeting GDPR, SOC2, and ISO27001 requirements. It also offers an intuitive, user-friendly interface, making it easier for beginners to get started with cloud deployments. That said, feel free to use any cloud provider of your choice and follow the same steps for the rest of the tutorial.

Step 1: Setting up a NodeShift Account

Visit app.nodeshift.com and create an account by filling in basic details, or continue signing up with your Google/GitHub account.

If you already have an account, log in and go straight to your dashboard.

Image-step1-1

Step 2: Request a Custom Quote

Bare metal nodes are not included by default in the Compute Nodes tab; instead, you can request these customized configurations using NodeShift's exclusive Custom Order feature.

After accessing your account, you should see a dashboard (see image), now:

1) Navigate to the menu on the left side.

2) Click on the Custom Order option.

Image-step2-1

3) Click Start to begin creating a bare metal server request.

Image-step2-2

Step 3: Select the configuration for the Required Nodes

1) The first option you see is the Region dropdown. This option lets you select the geographical region where you want your servers to reside (e.g., United States).

Image-step3-1

2) Next, you see the Reliability dropdown, where you can choose the uptime guarantee level you seek for the servers. We're going with "High Availability" for now.

Image-step3-2

3) Select the time commitment, e.g., 24 hours, 12 months, etc. On the right, you'll also see an option to provide the number of nodes you need. For demonstration purposes, we'll need one server as the master node and at least one server as the worker node; hence, we are requesting two nodes. You may use any number of nodes as per your requirements.

Image-step3-3

4) Most importantly, select the correct specifications for your bare metal servers according to your workload requirements by sliding the bars for each option. For the scope of this tutorial, we'll be requesting two bare metal servers with 8vCPUs/64GB/1TB SSD each. You can also choose the bandwidth speed of your servers.

Image-step3-4

Step 4: Choose an Image and Place Request

Next, you’ll need to choose an image for your servers. We’ll select Ubuntu, as we have decided to deploy Kubernetes on Ubuntu 22.04 for this tutorial. You may also add any additional comment/information you want the team to know beforehand, e.g., we are specifying that we need "bare metal" servers (and not VM) to provide additional clarity.

If everything looks good, click Request Quote to submit the quote request.

Image-step4-1

Once the request is placed, the NodeShift team will get back to you within 24 hours with a quote. After you review and approve the quote, the team will deploy the requested resources into your account.

Step 5: Connect to the Compute Node with SSH

As soon as you receive the resources in your account, follow the steps outlined below to connect to the servers via SSH:

1) Open your local terminal and run the below SSH command:

(replace root with your username and paste the IP of your server in place of ip)

ssh root@ip

2) In some cases, your terminal may ask for confirmation before connecting. Type 'yes' and press Enter.

3) A prompt will request a password. Type the server's password (that has been assigned to you), and you should be connected.

4) Use the same steps as above to connect with any server that you'll use during deployment.

Output:

Image-step5-1
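If your server is set up for SSH key authentication instead of a password, the connection looks like this (the key path and IP below are just examples; substitute your own):

ssh -i ~/.ssh/id_ed25519 root@203.0.113.10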

Step 6: Setting up the environment

Before configuring our nodes, we need to install dependencies and perform pre-configuration steps to set up the infrastructure.

Kubernetes requires a container runtime to manage containerized workloads. containerd is one of the container runtimes officially supported by Kubernetes. Let's start with setting up containerd.

1) Install containerd

sudo apt install -y containerd

Output:

Image-step6-1

2) Configure containerd to use the systemd cgroup driver (SystemdCgroup)

a) Create the configuration directory:

sudo mkdir -p /etc/containerd

b) Generate the default config and edit it so that containerd uses the systemd cgroup driver (the Ubuntu containerd package doesn't ship a config.toml, so we generate one before running sed):

containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

c) Restart containerd

sudo systemctl restart containerd
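To confirm the change took effect, the first command below should print SystemdCgroup = true, and the second should show containerd as active (running):

grep SystemdCgroup /etc/containerd/config.toml
systemctl status containerd --no-pager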

containerd is successfully configured; let's move on to the next steps.

3) Disable Swapping

Use the following commands one by one to disable swapping and to keep it disabled after reboot

sudo swapoff -a
sudo sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab
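After this, the Swap row should show 0B in the output of:

free -h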

4) Update the Ubuntu package source-list

sudo apt-get update

Output:

Image-step6-2

5) Install dependencies

sudo apt-get install -y apt-transport-https ca-certificates curl gpg socat docker.io

Output:

Image-step6-3

  • apt-transport-https: Enables apt to fetch packages securely over HTTPS, required for accessing Kubernetes and related repositories.

  • ca-certificates: Provides trusted certificates to validate secure HTTPS connections for fetching packages.

  • curl: A command-line tool to download files or transfer data from URLs; used here to fetch repository keys and Kubernetes binaries.

  • gpg: Adds and verifies GPG keys for secure package signing, ensuring the authenticity of Kubernetes-related packages.

  • socat: A utility to establish bidirectional data transfers between processes or network sockets, critical for Kubernetes port forwarding and networking.

  • docker.io: Installs Docker, which bundles its own container runtime and CLI. Kubernetes will use containerd here, so this package is mainly a convenience for working with images directly.

6) Use curl to add the GPG key for the Kubernetes repository

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

7) Add the Kubernetes APT repository to the system's source list

echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

After this, update the package source list again so the new repository takes effect:
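sudo apt-get update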

8) Install tools required to run Kubernetes

sudo apt-get install -y kubelet kubeadm

Output:

Image-step6-4

Confirm the installation:

kubelet --version
kubeadm version

Output:

Image-step6-5
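Optionally, pin these packages so a routine apt upgrade doesn't move them to an incompatible version (the official kubeadm install guide recommends holding them):

sudo apt-mark hold kubelet kubeadm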

Also, install the kubectl binary using curl by running the following commands one by one:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl   # install kubectl

Confirm the installation:

kubectl version --client

Output:

Image-step6-6

  • kubelet: Runs on each node to manage containerized workloads and maintain the desired state.

  • kubeadm: Simplifies Kubernetes cluster setup by automating the initialization and configuration process.

  • kubectl: Provides a command-line interface to interact with and manage Kubernetes clusters and resources.

Step 7: Configure the master node

As mentioned earlier, we are using two bare metal servers, one of which will become a master node and the other a worker node. Now is the time to separate both servers according to their roles.

Let's configure our current server (where we've been performing all the above steps) as our master node, and later on, we'll set up our second server as the worker node.

1) Enable IP forwarding

We'll need to enable IP forwarding to allow container-to-container communications across nodes.

echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

Output:

Image-step7-1

We'll edit /etc/sysctl.conf to keep the changes intact on reboot.

Open the file using Nano:

nano /etc/sysctl.conf

Add/uncomment the line "net.ipv4.ip_forward=1"

Image-step7-2

Apply the changes:

sudo sysctl -p

Output:

Image-step7-3
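Note: the official Kubernetes container-runtime setup also recommends loading the overlay and br_netfilter kernel modules and letting bridged traffic pass through iptables. If pod-to-pod networking misbehaves later, apply this standard snippet from the Kubernetes docs on every node:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system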

2) Initialize Kubernetes control-plane

Now, initialize the master node as a control plane to create the cluster. If your pod network add-on requires a specific pod address range, you can pass it to kubeadm init with the --pod-network-cidr flag; the defaults work with the Calico manifest we apply below.

sudo kubeadm config images pull
sudo kubeadm init

Output:

Image-step7-4

Note the kubeadm join command printed at the end of the output; we'll use it to join the worker node to the cluster.

Image-step7-5
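The join command looks roughly like the one below (the address, token, and hash are placeholders; use the exact values from your own output):

kubeadm join <MASTER_IP>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

If you lose it, you can regenerate it at any time by running this on the master node:

kubeadm token create --print-join-command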

a) Set up $HOME/.kube/config:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

b) Change the hostname

This is an optional step, but it is recommended, especially if you have more than one worker node.

We'll change our current server's hostname to master-node.

sudo hostnamectl set-hostname master-node

c) Install a network add-on for allowing communications between the pods in the cluster:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

Output:

Image-step7-6
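You can watch the Calico and other system pods come up before checking the node status (note that newer Calico releases also publish this manifest on GitHub under projectcalico/calico, should the URL above ever stop resolving):

kubectl get pods -n kube-system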

To check that the node is ready as a control plane:

kubectl get nodes

Output:

Image-step7-7

As shown above, you should see your master node with the status Ready, acting as the control plane.

Step 8: Configure the worker node

Next, let's configure and set up our worker node. Note that you can have more than one worker node in the cluster; however, for the scope of this tutorial, we'll be setting up only one worker node.

a) Connect to the second server via SSH (refer to Step 5).

b) Change the hostname of this server to worker01.

(Note: If you have more than one worker node, assign a unique name to each one)

sudo hostnamectl set-hostname worker01

The initial setup of the worker node is exactly the same as that of the master node; the only difference is that it will not be initialized as a control plane. So, on the worker node, repeat all of Step 6 in the same way, and then repeat Sub-step 1 of Step 7 (enabling IP forwarding).

Once done, proceed to the following steps to attach the worker node to the cluster.

To join the worker node to the cluster, use the kubeadm join command you noted earlier. Copy and paste that command on the worker node, and you should see an output like this:

Image-step8-1

If you see this output, it means the worker node has been successfully attached to the cluster.
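Back on the master node, both nodes should now appear in the cluster (the new worker can take a minute to reach the Ready state):

kubectl get nodes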

Step 9: Deploying a test application

Now that we have our Kubernetes cluster deployed on bare metal servers, let's deploy a simple NGINX server as a test application to see if everything is working as expected.

In Kubernetes, applications are managed using declarative configuration files called manifests, which define the desired settings and state of an application.

Connect to your master-node (control-plane) and create the manifests as follows:

1) Create Deployments manifest

a) Create a file named deployment.yaml in the root directory

touch deployment.yaml

b) Open the file using Nano

sudo nano deployment.yaml

c) Add the following content to the file

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

The file looks like this:

Image-step9-1

d) Apply the changes

kubectl apply -f deployment.yaml

Output:

Image-step9-2
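You can also confirm that the rollout completed before moving on:

kubectl rollout status deployment/nginx-deployment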

2) Create Services manifest

a) Create a file named service.yaml in the root directory

touch service.yaml

b) Open the file using Nano

sudo nano service.yaml

c) Add the following content to the file

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

The file looks like this:

Image-step9-3

d) Apply the changes

kubectl apply -f service.yaml

Output:

Image-step9-4
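A note on the service type: on bare metal there is no cloud load balancer to fulfill type: LoadBalancer, so the service's EXTERNAL-IP will stay in the pending state unless you install an implementation such as MetalLB. The service is still reachable through the NodePort that Kubernetes allocates automatically. If you prefer to make that explicit, a minimal NodePort variant of the manifest above would look like this (the nodePort value is just an example; it must fall in the 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort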

3) Verify the deployment

kubectl get pods

You should see the status of two pods as Running:

Image-step9-5

Also, check the service:

kubectl get services

Output:

Image-step9-6

4) Access the application

Copy the nginx service port from the output of the previous step, i.e., the high-numbered NodePort shown in the PORT(S) column (the 3xxxx part of something like 80:3xxxx/TCP), and use it to access the application in your browser:

http://WORKER_NODE_IP:PORT
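You can also test from a terminal on any machine that can reach the worker node (same placeholders as above):

curl -I http://WORKER_NODE_IP:PORT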

Output:

Image-step9-7

If you see a page like this, congratulations! The test application has been successfully deployed and is working.

Conclusion

In this blog, we covered the key concepts of deploying Kubernetes on bare metal, including the benefits of using bare metal over VMs for this use case. We also created and deployed a simple NGINX application to test our Kubernetes deployment. We used bare metal servers powered by NodeShift for the demonstration in this guide. For optimal Kubernetes performance, NodeShift offers high-performance computing and GPU nodes, ensuring you get the most out of your workloads with reliable scalability and support at an affordable rate, without compromising on standards.

