
Wonder Agudah


Deploying A Kubernetes Cluster Using Google Kubernetes Engine (GKE)

The advent and widespread adoption of virtualisation in computing has not been an entirely smooth journey; these technologies are under constant pressure to improve. One such improvement brought about containerization: packaging an application together with its dependencies, such as libraries, frameworks, and a runtime, into a standalone, lightweight unit. The difficulty of managing many containers at once necessitated the advent of Kubernetes (K8s).

Kubernetes is an open-source platform for container orchestration, that is, the automated management of containerized applications. It began as an internal Google technology, but given its industry-wide usefulness and growing popularity it was open sourced in 2014.

K8s was developed to abstract away the tedious work of creating, managing, and destroying many containers and, most significantly, to provide a central control plane for monitoring the execution and operation of deployments.

Google Cloud's managed Kubernetes service, Google Kubernetes Engine (GKE), has made adopting Kubernetes far more accessible to beginners. It simplifies the creation, management, deployment, and monitoring of Kubernetes clusters.

Before we begin, I would like to define some basic concepts for readers who are not yet conversant with Kubernetes or are just starting out.

Pods: A pod is the smallest deployable unit in Kubernetes and is an abstraction over one or more containers. Containers in a pod often need to communicate with each other, and the pod facilitates this by allowing them to share storage and networking resources.

Deployment file: Containers and pods are ephemeral, meaning they do not persist forever. A deployment file therefore defines the number of pod replicas to be created, and Kubernetes keeps that number running, replacing pods that fail.

Service: A Service defines how we choose to expose our pods on the network, giving them a stable endpoint.

Nodes: These are the underlying compute resources (virtual machines/instances) that a Kubernetes cluster is set up on. On GCP these are Compute Engine instances; on AWS, EC2 instances.
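To tie these concepts together, here is a minimal sketch of a Deployment and a Service expressed as Kubernetes manifests. The names (nginx-demo and so on) are illustrative and not part of the console walkthrough below; the sketch assumes kubectl is installed and pointed at a cluster:

```shell
# Apply a minimal Deployment (3 nginx pods) plus a Service exposing them.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-demo
spec:
  replicas: 3              # number of pods the Deployment keeps running
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-demo
spec:
  type: LoadBalancer       # expose the pods behind an external IP
  selector:
    app: nginx-demo        # routes traffic to pods with this label
  ports:
  - port: 80
    targetPort: 80
EOF
```

Note how the Service finds its pods purely by label selector, which is why the Deployment's pod template and the Service share the same `app` label.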

This demonstration of how to deploy a cluster on Google Kubernetes Engine has three principal objectives.

Firstly, to build and modify GKE clusters using the Google Cloud Console.

Secondly, to deploy a Pod using the Google Cloud Console.

Thirdly, to monitor and examine the provisioned cluster and pods.

Let’s get started!

To begin with, you need to be registered as a GCP user. The process is simple, and you can easily set up your account by following the step-by-step guide on the Google Cloud website: https://console.cloud.google.com/. Alternatively, you can register an account on Qwiklabs (https://www.cloudskillsboost.google/) to follow this demonstration.

Now that we have covered the ways to get access, we can log in to the GCP console.

A. Click on the navigation menu and select Kubernetes Engine, then select Clusters from the options provided.

 

B. Click Create in the dialog that opens.

 

C. Choose the 'Standard' configuration.

 

The console shows the fields you can fill in to configure the cluster. Depending on your workload and the capacity you need, you can set the number of nodes, the machine type, and a location close to your end users to reduce latency.

D. I am creating this cluster in the 'us-central1-a' zone and naming it 'standard-cluster-1'. For this demonstration I will accept the defaults and click 'Create'. (I will not delve into advanced configurations, since this article aims to be a simple introduction to working with GKE.)
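If you prefer the command line, the same cluster can be created with the gcloud CLI from Cloud Shell or a local terminal. This is a sketch that assumes billing and the GKE API are already enabled on your project:

```shell
# Create a zonal Standard cluster matching the console defaults above.
gcloud container clusters create standard-cluster-1 \
    --zone us-central1-a \
    --num-nodes 3

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials standard-cluster-1 \
    --zone us-central1-a
```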

 

E. It can take a while for the cluster to be provisioned. As shown below (in the area I've marked up), I was prompted that cluster creation 'can take up to five minutes or more'.

 

F. As you can see, the cluster was successfully created.

A great benefit of running workloads in the cloud is the elasticity it provides. This section of the demonstration covers a simple modification of a GKE cluster. Suppose that while hosting a workload on GKE you find your use case needs more compute resources; you can simply provision additional nodes as required.

G. Click on the navigation menu and select Kubernetes Engine. Select Clusters from the options provided and click on 'standard-cluster-1'.

 

H. Click on default-pool in the Node pools section.

 

I. Select 'RESIZE' at the top of the default-pool page.

 

J. Change the number of nodes from 3 to 4 and click the 'RESIZE' button.
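The same resize can be performed from the command line. A sketch, assuming the cluster and node pool names used in this demo:

```shell
# Resize the default node pool from 3 to 4 nodes.
gcloud container clusters resize standard-cluster-1 \
    --node-pool default-pool \
    --num-nodes 4 \
    --zone us-central1-a
```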

 

We will now deploy a workload: a reverse proxy running in a deployed pod. The reverse proxy we will be using is an NGINX web server.

K. As usual, go to the navigation menu and select Kubernetes Engine. Select 'Workloads' from the options listed. When the dialog opens, click 'Deploy'.

 

L. The default container image is 'nginx'. This is the latest version, so we can proceed and click 'Continue'. (For security purposes, and as best practice prescribes, you should configure environment variables when deploying your workload.)

 

M. At the bottom of the page there are a number of fields we can modify to suit our application's needs. As you can see, the cluster we created (standard-cluster-1) is available, so we can go ahead and click 'DEPLOY'.
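For reference, the console's Deploy flow is roughly equivalent to the following kubectl commands; a sketch with an illustrative deployment name:

```shell
# Create an nginx Deployment on the current cluster and watch its rollout.
kubectl create deployment nginx-1 --image=nginx:latest
kubectl rollout status deployment/nginx-1

# Confirm the Deployment and its pods are up.
kubectl get deployments
kubectl get pods
```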

 

N. After a short while, we can see our deployment is complete.

Now we can examine the details of our deployed pods.

O. Here is an overview of some events and key metrics such as disk usage, CPU, and memory consumption. These metrics, coupled with logs, are what monitoring and observability tools like Prometheus and Grafana ingest to provide analytics, insights, and data visualization to meet business needs.

 

Events

YAML

The YAML file for the configuration of the workload can also be viewed on the YAML tab.
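The same manifest can be pulled from the command line. A sketch, using the illustrative deployment name from earlier:

```shell
# Print the live Deployment manifest as YAML, exactly as the cluster sees it.
kubectl get deployment nginx-1 -o yaml
```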

Similarly, granular visibility into the deployed pods can be obtained by selecting any of the pods under the 'Managed pods' section.

Key metrics of the deployed pods.

Events showing the timestamps at which the pods were created, along with their current states.

The YAML tab also provides an overview of the YAML files used in configuring the pods.
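The same pod-level inspection can be done from the command line; a sketch using kubectl (substitute a real pod name from the first command):

```shell
# List pods, then drill into one for events, container state, and logs.
kubectl get pods
kubectl describe pod <pod-name>   # events and current state for one pod
kubectl logs <pod-name>           # container logs for that pod
kubectl top pods                  # CPU/memory metrics (requires metrics-server)
```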

 

This article has sought to provide a simple, introductory-level understanding of container orchestration using Kubernetes, and to show how you can get started on GCP by using GKE to deploy your containerized workloads.
