Hussein Alamutu
A Simple Guide to Container Orchestration with Kubernetes

Introduction

You'll get the most out of this guide if you want to learn container orchestration with Kubernetes.

The world of container orchestration is complex and ever-changing, but you can easily understand the basics, and that basic knowledge can make a big difference. There are also free Kubernetes courses widely available on the web, including guides like this.

Combine the information from these guides and courses with some practice, and you are well on your way to becoming proficient in container orchestration.

What you'll learn

This guide focuses on an intermediate topic in Linux system administration. After reading through, you are expected to understand:

  • What container orchestration is
  • What Kubernetes is, and
  • How to do container orchestration with Kubernetes using AWS.

The basics of container orchestration

Container orchestration is part of a CI/CD pipeline in DevOps; it is used to automate the management, deployment, scaling, and networking of containers.

In the CI/CD process, there are several container orchestration platforms; the two major ones are Kubernetes and Docker Swarm. This guide will focus on Kubernetes.

If you are familiar with CI/CD or DevOps, you should already know what containers are: lightweight, isolated environments that package an application together with everything it needs to run. They can host anything from a small microservice to a large application.

Why Orchestration?
Orchestration is the automated management of the lifecycle of our application.

  • Orchestration helps us automate our deployment process for continuous deployment
  • Orchestration helps us handle complicated workflows in deploying our application

Understanding how containers work

Containers are like standardized packaging for microservice applications, bundling all the application code and dependencies the application needs.

Prior to containerization, the traditional way of building and deploying code was to run your software directly on a physical server, where you had to install (or use an existing) operating system and then install the software's dependencies on top of it.

Meanwhile, containers are similar to the virtual machines (VMs) you create on your local machine when building applications. With a VM you can create a replica of a whole machine, which allows you to run different versions of software dependencies side by side.

Virtualization vs Containerization

In simpler words: using virtual machines, I am able to build two different applications that require two different versions of, let's say, Node.js, i.e. node v17.2.0 and node v18.6.0. If you built the two apps directly on your local machine, it would be impossible to use two different Node versions at once.

Now, the best part: while containers are similar to VMs, containers are way cooler. Containerization enables you to deploy multiple applications that share the same operating system on a single virtual machine or server.

What is Kubernetes?

Kubernetes is a container orchestration system packed with features for automating an application's deployment; it makes it easy to scale applications and ship new code.

Well, here is what the Kubernetes docs have to say:

Kubernetes is a portable, extensible, open source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.

For the continuous deployment process, this guide focuses on two core Kubernetes building blocks: pods and services.

Pods are abstractions over one or more containers, and they are ephemeral. It's not uncommon for a deployment to involve a few containers that belong together, hence they are grouped into pods.

Services are an abstraction over a set of pods that exposes them through the network. Applications are often deployed with multiple replicas to help with load balancing and horizontal scaling.

NOTE:

  • Load Balancing: handling traffic by distributing it across different endpoints
  • Horizontal Scaling: handling increased traffic by creating additional replicas so that traffic can be divided across the replicas (see the quick sketch after this list)
  • Replica: a redundant copy of a resource often used for backups or load balancing
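
To see how replicas tie into horizontal scaling in practice, here is a quick sketch using kubectl. The deployment name my-app is the one we create later in this guide, and the replica count is just an example:

# Scale the my-app deployment from 2 to 4 replicas to absorb more traffic
kubectl scale deployment my-app --replicas=4

# Watch the new pods come up
kubectl get pods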

A practical overview

Prerequisites
This practical section assumes you have previous knowledge of creating images and containers with Docker, and basic knowledge of cloud computing with AWS.

What you will need

  • A command line interface
  • A docker hub account
  • A docker image and container
  • An AWS account

Time to get your hands dirty: wear your gloves, get your tools ready, and let's get started.

Installing Kubernetes

There are many services that can be used to set up Kubernetes, but this guide will focus on using AWS.

Why AWS?
Setting up Kubernetes from scratch is a bit complicated, but AWS makes it easier. This guide will walk you through how to create a Kubernetes cluster using AWS and how to create a node group for it.

Note:

  • A Kubernetes cluster consists of a set of nodes that run containerized applications. When you deploy Kubernetes you get a cluster, and every cluster has at least one worker node.
  • A node is a worker machine that runs pods. The nodes are managed by a control plane, which handles the scheduling of pods across the nodes in the cluster.

Getting our container ready

The Underlying Process
Docker images are loaded from the container registry into Kubernetes pods. Access to the pods is exposed to consumers through a service.

To deploy an application on Kubernetes, there are two basic configuration files that need to be created. They can be written in either YAML or JSON, but YAML is the most commonly used. YAML is a great way to define our configurations because it is simple and readable.

The first configuration is the "deployment.yaml" file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: simple-node
        image: YOUR_DOCKER_HUB/simple-node
        ports:
        - containerPort: 80
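Before applying a file against a real cluster, you can sanity-check it client-side. A quick sketch; the --dry-run=client flag validates the file and prints what would be created without actually creating anything:

# Validate deployment.yaml locally without creating resources
kubectl apply -f deployment.yaml --dry-run=client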

Then the second one, the "service.yaml" file:

apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: my-app

Once these files are available, the next thing to do is to create a Kubernetes cluster using Elastic Kubernetes Service (EKS) on AWS. Once logged into your AWS console, you can easily find it by typing "eks" into the search bar, then clicking on it.

The next step is to create an EKS cluster: navigate to the Clusters section in the left panel and click Create cluster.

But before that, open a new tab and search for IAM. Now...

Step 1: Create EKS Cluster IAM role:

Navigate to the Roles tab in the Identity and Access Management (IAM) dashboard in the AWS Console

  1. Click Create role
  2. Select type of trusted entity:
  3. Choose EKS as the use case
  4. Select EKS-Cluster
  5. Click Next: Permissions
  6. Click Next: Tags
  7. Click Next: Review
  8. Give the role a name, e.g. EKSClusterRole
  9. Click Create role

Step 2: Create an SSH Pair

  1. Navigate to the Key pairs tab in the EC2 Dashboard
  2. Click Create key pair
  3. Give the key pair a name, e.g. mykeypair
  4. Select RSA and .pem
  5. Click Create key pair

Step 3: Create an EKS Cluster

  1. Navigate to the Clusters tab in Amazon EKS dashboard in the AWS Console
  2. Click Create cluster
  3. Specify: a unique Name (e.g. MyEKSCluster), Kubernetes Version (e.g. 1.21), and Cluster Service Role (select the role you created above, e.g. EKSClusterRole)
  4. Click Next
  5. In Specify networking look for Cluster endpoint access, click the Public radio button
  6. Click Next and Next
  7. In Review and create, click Create

It may take 5-15 minutes for the EKS cluster to be created.
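
As a side note, if you prefer the command line over the console, a similar cluster can be created with the eksctl tool. This is only a sketch: the names and sizes below are the example values from this guide, and the region is an assumption you should adjust to yours:

# Create an EKS cluster with a managed node group in one command
eksctl create cluster \
  --name MyEKSCluster \
  --region us-east-1 \
  --nodegroup-name MyNodeGroup \
  --node-type t3.micro \
  --nodes 2 \
  --ssh-access \
  --ssh-public-key mykeypair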

Troubleshooting: If you get a message like this:

Cannot create cluster the targeted availability zone does not currently have sufficient capacity to support the cluster

choose another availability zone and try again. You can change the region (which determines the availability zones used) in the upper right corner of your AWS console.

Step 4: Create a Node Group
Now, go back to the opened IAM tab,

- Create EKS Cluster Node IAM role

  1. In the IAM Roles tab, click Create role
  2. Select type of trusted entity:
  3. Choose EC2 as the use case
  4. Select EC2
  5. Click Next: Permissions
  6. In Attach permissions policies, search for each of the following and check the box to the left of the policy to attach it to the role: AmazonEC2ContainerRegistryReadOnly, AmazonEKSWorkerNodePolicy, AmazonEKS_CNI_Policy.
  7. Click Next: Tags
  8. Click Next: Review
  9. Give the role a name, e.g. NodeRole
  10. Click Create role

Now, go back to the Cluster tab...

Creating the node

  1. Click on the Compute tab in the newly-created cluster
  2. Click Add Node Group
  3. Specify: a unique Name (e.g. MyNodeGroup)
  4. Node IAM Role (select the role you created above, e.g. NodeRole)
  5. Create and specify SSH key for node group
  6. In Node Group compute configuration, set the instance type to t3.micro and the disk size to 4 GiB to minimize costs
  7. In Node Group scaling configuration, set the number of nodes to 2
  8. Click Next
  9. In Node Group network configuration, toggle on Configure SSH access to nodes
  10. Select the EC2 pair created above (e.g. mykeypair)
  11. Under Allow remote access from, select All
  12. Click Next
  13. Review the configuration and click "Create"

At this point, we have a Kubernetes cluster set up and understand how YAML files can be created to handle the deployment of pods and expose them to consumers.

Moving forward, the Kubernetes command-line tool (kubectl) will be used to interact with our cluster. The YAML files that we created will be loaded through this tool.

Interacting With Your Cluster

The next step is to load the Infrastructure as Code (IaC) configurations: the deployment.yaml and service.yaml files created earlier. They are loaded into the Amazon EKS cluster using the kubectl commands below.
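
Note: before kubectl can reach your new cluster, it has to be pointed at it. If you have the AWS CLI installed, one command fetches the cluster's connection details (assuming the cluster name and region from earlier; adjust to yours):

# Add the EKS cluster to your local kubeconfig
aws eks update-kubeconfig --name MyEKSCluster --region us-east-1

# Confirm kubectl can see the worker nodes
kubectl get nodes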

Step 1: Deploy resources

Send the YAML files to Kubernetes to create the resources. This will create the specified number of replicas of the image:

kubectl apply -f deployment.yaml

and create the service:

kubectl apply -f service.yaml

Step 2: Confirm deployment

Verify that the resources have been created:

kubectl get pods

and check to see that they were created correctly:

kubectl describe services

To get more metadata about the cluster, run:

kubectl cluster-info dump

By loading these configuration files into the Kubernetes cluster, you have set up Kubernetes to pull your Docker image from your Docker Hub account.

Troubleshooting

If you ran into some issues:

  • Check to see if the node group was successfully created in your EKS cluster. If it wasn't, Kubernetes won't have any nodes to schedule pods on.
  • If you get an error message about not being able to pull your Docker image, confirm that your Docker Hub repo is set to public. The commands below can help you investigate.
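
A few kubectl commands that help when digging into failures; the pod name is a placeholder you should take from the kubectl get pods output:

# Show recent cluster events, useful for scheduling and image pull errors
kubectl get events

# Inspect a specific pod; look for errors like ImagePullBackOff
kubectl describe pod YOUR_POD_NAME

# Check that the worker nodes joined the cluster and are Ready
kubectl get nodes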

What Next?

The next step is to secure and tune Kubernetes services for production.

Managing Cost

We can start by configuring the cluster to greatly reduce costs (i.e. specifying the number of replicas to be created and minimizing the resources used, such as CPU and memory).
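
Also keep in mind that EKS bills for the control plane and the EC2 worker nodes for as long as they exist, so tear down what you are not using. From the console, delete the node group first and then the cluster; if you created everything with eksctl, one command (using the example cluster name from this guide) does it all:

# Delete the cluster and its node groups to stop incurring charges
eksctl delete cluster --name MyEKSCluster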

Security Consciousness
Make sure your Kubernetes clusters are secured from those with malicious intent; you can do that by configuring who has access to the Kubernetes pods and services.

Applications deployed for production differ from the ones deployed for development. In production, the application is no longer running in an isolated environment; it has to be configured with least-privilege access to prevent unexpected traffic and allow only expected traffic through.
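
As a minimal sketch of least-privilege access using Kubernetes RBAC (the role and user names here are purely illustrative):

# Create a role that can only read pods in the default namespace
kubectl create role pod-reader --verb=get --verb=list --verb=watch --resource=pods

# Bind that role to a specific user, who then can't touch anything else
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=dev-user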

Preparing for Scaling and Availability

Ensure the Kubernetes service is able to handle the number and size of user requests, and that the application is responsive, i.e. available when needed.

One way to ascertain this before releasing your application to a production environment is load testing: simulating a large number of requests to our application, which gives us a baseline understanding of its limits.
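
As one example, Apache Bench (ab) can generate a burst of requests against an endpoint; the URL below is a placeholder for wherever your service is exposed:

# Send 1000 requests, 50 at a time, to get a baseline for throughput and latency
ab -n 1000 -c 50 http://YOUR_SERVICE_ENDPOINT/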

Handling Backend Requests with Reverse Proxy

A reverse proxy is a single interface that forwards requests from the frontend (i.e. the users of our application) and appears to the user as the origin of the responses.

An API gateway functions as a reverse proxy that accepts API requests from users, fetches the requested services, and returns the right result.

Nginx is a web server that can be used as a reverse proxy; its configuration is specified in an nginx.conf file.

Accessing the Kubernetes Cluster using Reverse Proxy

First, check the pods to see what is running, using:
kubectl get pods

Then, check the services to find the entry point for accessing the pod:

kubectl describe services

In the output you will find the service Name (i.e. alamz-app-svc) and Type (ClusterIP), which means the service is only accessible within the cluster.

Now, create an nginx.conf file. A sample nginx.conf file looks like this:

events {
}
http {
    server {
        listen 8080;
        location /api/ {
            proxy_pass http://alamz-app-svc:8080/;
        }
    }
}

The Nginx service listens for requests on port 8080. Any request whose path is prefixed with /api/ will be forwarded to the Kubernetes service alamz-app-svc; the trailing slash in proxy_pass means the /api/ prefix is stripped before forwarding. alamz-app-svc is a name that our Kubernetes cluster resolves internally.

Put this in your Dockerfile:

FROM nginx:alpine

COPY nginx.conf /etc/nginx/nginx.conf

The nginx.conf sets up the service to listen for requests coming in on port 8080 and forward any request to the API endpoints to the alamz-app-svc service in the cluster.
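
The image then needs to be built and pushed to Docker Hub so the cluster can pull it. A sketch, assuming the image name used in the deployment file below:

# Build the reverse proxy image from the Dockerfile above
docker build -t YOUR_DOCKER_HUB/simple-reverse-proxy .

# Push it to Docker Hub so Kubernetes can pull it
docker push YOUR_DOCKER_HUB/simple-reverse-proxy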

Setting Up the YAML files for the Reverse Proxy

The setup is similar to the previous YAML files created in this guide. The basic functionality is to create a single pod named reverseproxy and configure it to limit resources.

Your "deployment.yaml" file should look like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: reverseproxy
  name: reverseproxy
spec:
  replicas: 1
  selector:
    matchLabels:
      service: reverseproxy
  template:
    metadata:
      labels:
        service: reverseproxy
    spec:
      containers:
      - image: YOUR_DOCKER_HUB/simple-reverse-proxy
        name: reverseproxy
        imagePullPolicy: Always          
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "1024Mi"
            cpu: "500m"       
        ports:
        - containerPort: 8080
      restartPolicy: Always


Your "service.yaml" file should look like:

apiVersion: v1
kind: Service
metadata:
  labels:
    service: reverseproxy
  name: reverseproxy-svc
spec:
  ports:
  - name: "8080"
    port: 8080
    targetPort: 8080
  selector:
    service: reverseproxy

Deploying Reverse Proxy

Finally, we are ready to deploy our reverse proxy. Deploy the reverse proxy using:

kubectl apply -f reverseproxy_deployment.yaml
kubectl apply -f reverseproxy_service.yaml

kubectl is the command-line tool used to interact with our Kubernetes cluster, and the YAML files are the Infrastructure as Code (IaC) that specifies the configuration for our reverse proxy.
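
To sanity-check the reverse proxy without exposing it publicly, you can tunnel to it from your machine. A sketch; the service name matches the service.yaml above, and the backend service it proxies to must also be running:

# Forward local port 8080 to the reverseproxy-svc service
kubectl port-forward service/reverseproxy-svc 8080:8080

# In another terminal, hit the proxied API
curl http://localhost:8080/api/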

Gracias Amigos

Congrats! If you have carefully followed along up to this point, gracias amigo! While I am fully aware this guide isn't an all-inclusive one, I have made sure to point you in the right direction, where you can get the information that wasn't included.

Meanwhile, I am putting this guide out here to serve as the MVP (minimum viable product)... what? Is this now a product management class? Well... no. I am just a Cloud DevOps Engineer with a Product Design background who somehow fell in love with writing. Okay, enough about me.

Let me cut to the chase: if there is anything you want me to go deeper on, or any information you feel I skipped, please make sure to inform me so I can include it in the next release.

Okay, that's it, fellow techies. See ya! Ehhmm, one sec: to supplement this guide I will write a short article about securing clusters and containers, and resource quota management.
