DEV Community

How to Install K3S on AWS EC2 & Deploy Simple JS Game 🚀

In this tutorial you will learn how to set up a K3s cluster on EC2, join worker nodes, and deploy a simple JavaScript game.

First we need to create EC2 instances. If you don't know how to create an EC2 instance, you can check this article.

Before You Start

You need:

  • 3 EC2 instances (1 master, 2 workers)
  • Security groups configured to allow the necessary ports (e.g. SSH, the K3s API, and the NodePort range)
  • AWS CLI configured for your account

Hardware Requirements:

Hardware requirements scale based on the size of your deployments. Minimum recommendations are outlined here.

1. Minimum Hardware Requirements for Master Node:

The master node in a Kubernetes cluster manages the control plane, handling tasks like scheduling, maintaining cluster state, and managing the API server. As such, it requires adequate resources to perform these tasks efficiently.

  • CPU: 2 vCPUs (minimum)
  • RAM: 2 GB (minimum); 4 GB or more recommended for production environments
  • Storage: 10 GB (minimum)

2. Minimum Hardware Requirements for Worker Nodes:

Worker nodes in a Kubernetes cluster host the application workloads, running containers that are orchestrated by the master node. The requirements for worker nodes depend on the workloads they will run, but the following are the general minimum recommendations:

  • CPU: 1 vCPU (minimum); more depending on workload demands
  • RAM: 1 GB (minimum); 2 GB or more recommended for moderate workloads
  • Storage: 10 GB (minimum)
  • Network: High-speed and low-latency network connectivity, especially for communication with the master node and other worker nodes

Example Configuration for AWS EC2:

  • Master Node: t3.medium (2 vCPUs, 4 GB RAM) for a small cluster
  • Worker Nodes: t3.small (2 vCPUs, 2 GB RAM) or t3.medium for moderate workloads
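After you SSH into an instance, a quick sanity check against these minimums can be sketched like this (standard Linux tools only; compare the printed values to the minimums listed above):

```shell
# Print this node's resources so you can compare them to the minimums above.
cpus=$(nproc)                                  # vCPU count
mem_mb=$(free -m | awk '/^Mem:/{print $2}')    # total RAM in MiB
disk_gb=$(df -BG --output=avail / | tail -n 1 | tr -dc '0-9')  # free GiB on /
echo "CPUs: ${cpus}, RAM: ${mem_mb} MiB, free disk: ${disk_gb} GiB"
```

On a t3.medium master you would expect 2 CPUs and roughly 4 GB of RAM.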

We can see that we have 3 instances running:

AWS ec2 instance console

These inbound rules need to be allowed for the K3s nodes:

| Protocol | Port | Source | Destination | Description |
|---|---|---|---|---|
| TCP | 2379-2380 | Servers | Servers | Required only for HA with embedded etcd |
| TCP | 6443 | Agents | Servers | K3s supervisor and Kubernetes API Server |
| UDP | 8472 | All nodes | All nodes | Required only for Flannel VXLAN |
| TCP | 10250 | All nodes | All nodes | Kubelet metrics |
| UDP | 51820 | All nodes | All nodes | Required only for Flannel WireGuard with IPv4 |
| UDP | 51821 | All nodes | All nodes | Required only for Flannel WireGuard with IPv6 |
| TCP | 5001 | All nodes | All nodes | Required only for embedded distributed registry (Spegel) |
| TCP | 6443 | All nodes | All nodes | Required only for embedded distributed registry (Spegel) |
  • The K3s server needs port 6443 to be accessible by all nodes.
  • Typically, all outbound traffic is allowed.
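If you prefer to script these rules with the AWS CLI (listed in the prerequisites), a dry-run sketch like the one below prints the `authorize-security-group-ingress` calls for a subset of the ports above; the security group ID and CIDR are placeholders, and you should add the remaining rows from the table the same way:

```shell
# Dry run: print (not execute) the AWS CLI calls that would open K3s ports.
SG_ID="sg-0123456789abcdef0"      # placeholder -- substitute your security group ID
CIDR="10.0.0.0/16"                # placeholder -- your VPC CIDR, not 0.0.0.0/0
CMDS=""
for rule in "tcp 2379-2380" "tcp 6443" "udp 8472" "tcp 10250"; do
  proto=${rule% *}                # e.g. "tcp"
  port=${rule#* }                 # e.g. "6443" or "2379-2380"
  CMDS="${CMDS}aws ec2 authorize-security-group-ingress --group-id ${SG_ID} --protocol ${proto} --port ${port} --cidr ${CIDR}
"
done
printf '%s' "$CMDS"               # review, then run the printed commands
```

Restricting `--cidr` to your VPC range keeps the cluster ports off the public internet; typically only SSH (and whatever NodePort you expose) needs wider access.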

Step 1: Install K3S on Master Node

In this tutorial I am using Ubuntu 22.04. If you are using an older version of Ubuntu, you may face some issues with iptables. Check more here:

Now

  • 1.0 SSH into your master node (I am using the web console)

ssh-console-k3s-master

Run the following commands in the terminal.

  • 1.1. sudo su (for root privileges)

    move-sudo

  • 1.2 apt update

apt-update

  • 1.3 Now install K3s on the master node.

Command: curl -sfL https://get.k3s.io | sh -

k3s-installation-done

  • 1.4 Check Status

systemctl status k3s

k3s-status

Now get the node token (needed to join worker nodes):

You need this token to securely join worker nodes to the master node (also known as the control plane). It provides:

  • Authentication
  • Secure communication
  • Cluster management
  • Simplified node addition

Command
sudo cat /var/lib/rancher/k3s/server/node-token

k3s-master-token

Step 2: Prepare Worker Node

As above, we need to create 2 EC2 instances for the worker nodes. For this tutorial we have two t3.medium EC2 machines. Now SSH into both worker nodes and perform the tasks below:

  1. Install Docker on both worker machines.

  2. Join each machine to the master node using the token.

  • 2.0 Change the hostnames to Worker-1 & Worker-2
  • 2.1 Install Docker

Commands:

  • sudo su
  • apt update
  • apt install docker.io -y

install docker

  • 2.2 Check Status

  • systemctl status docker

Worker-1
docker-status-w2

Worker-2

docker-status-w2

Step 3: Join Worker Nodes to the Cluster (Master Node)

First, get the token from the master node. We already collected the token earlier in this tutorial. Now, to join the nodes to the master node, run this command on each worker node:

Command
curl -sfL https://get.k3s.io | K3S_URL=https://<MasterNodePublicIP>:6443 K3S_TOKEN=<NodeToken> sh -

Here, replace <MasterNodePublicIP> with your EC2 master node's public IP and <NodeToken> with the token from the master node.
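As a concrete sketch of that substitution (the IP and token below are made-up placeholders, not values from this tutorial), you can assemble the command and inspect it before running it:

```shell
# Hypothetical example values -- use your own master IP and node token.
MASTER_IP="203.0.113.10"                    # placeholder (TEST-NET address)
NODE_TOKEN="K10abcdef::server:0123456789"   # placeholder token
JOIN_CMD="curl -sfL https://get.k3s.io | K3S_URL=https://${MASTER_IP}:6443 K3S_TOKEN=${NODE_TOKEN} sh -"
# Print instead of executing, since this is only a sketch:
echo "$JOIN_CMD"
```

The echo lets you verify the assembled command; on a real worker you would run the printed command itself.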

Worker-1

Worker-1

Worker-2
Worker-2

Now check the node list from the master node.

Command: kubectl get nodes

nodes-list

Step 4: Deploy a JS Game on K3s

To run an application on K3s, we need to create Kubernetes deployment and service files.

First Create a Namespace

A Kubernetes namespace is like a separate section within a Kubernetes cluster, where you can keep your resources organized and isolated from others. Think of it as different folders on your computer. Each namespace (or folder) can have its own set of applications, settings, and permissions, and they won't interfere with each other.

Command: kubectl create namespace js-game

create-namespace

Now Create Kubernetes Deployment File:

Command: nano js-game-app.yml

yaml-file

Now copy and paste the code into js-game-app.yml:

deployment-yml-file

In the image line, you can use your own Docker Hub image or any other image reference.
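The screenshot above shows the author's actual manifest; since it is an image, here is a minimal sketch of what a deployment plus NodePort service of this shape could look like. The names, image, replica count, and ports are assumptions for illustration, not the original file:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: js-game            # hypothetical name
  namespace: js-game
spec:
  replicas: 2
  selector:
    matchLabels:
      app: js-game
  template:
    metadata:
      labels:
        app: js-game
    spec:
      containers:
        - name: js-game
          image: nginx:alpine   # placeholder -- swap in your Docker Hub image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: js-game-svc        # hypothetical name
  namespace: js-game
spec:
  type: NodePort
  selector:
    app: js-game
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080      # must fall in the default NodePort range 30000-32767
```

With `type: NodePort`, the app in this sketch would become reachable at http://<MasterNodePublicIP>:30080, provided the security group allows that port.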

Applying the Manifest

Save this configuration to a file named js-game-app.yml and apply it using the following command:

Command: kubectl apply -f js-game-app.yml -n js-game

You will see this if everything works correctly:

k3s-service

Verify the Deployment

To check if the deployment, service, and ingress are created successfully, you can use the following commands:

kubectl get deployments -n js-game

deployments

kubectl get services -n js-game

services

kubectl get ingress -n js-game

ingress

Now check the running pods:

Command: kubectl get pods -n js-game

running-pods

Now open a browser and visit your master node's public IP (if your service is exposed as a NodePort, append that port; you can find it with kubectl get services -n js-game).

game-window

Ref:
