
Cassius Clay Filho


Discovering Kubernetes: First Steps and Basic Concepts

Welcome to the world of Kubernetes, where the complexity of managing containerized applications is transformed into a simpler and more agile experience. Imagine being able to scale, distribute, and manage your applications with just a few commands. That's the power of Kubernetes, an essential tool in the toolbox of developers and system operators. Whether you're starting your journey or just curious about what makes Kubernetes such a talked-about name, this article will assist you. Let's demystify Kubernetes and show how it can be your ally in developing modern applications.


What is Kubernetes?

Kubernetes is an open-source system that automates the deployment, scaling, and operation of containerized applications. Originally created by Google, it is now maintained by the Cloud Native Computing Foundation (CNCF). Think of it as a conductor who coordinates all the components of an orchestra to create perfect harmony; in this case, the orchestra consists of application containers that need to be efficiently managed and scaled.


Why is Kubernetes a great ally?

Using Kubernetes brings several advantages: it simplifies automation, enhances scalability, facilitates container management, and promotes portability across different hosting environments. It's like having a personal assistant to manage your applications, ensuring they are always available, regardless of traffic volume.


Main Components of Kubernetes
To better understand Kubernetes, often abbreviated as k8s, we need to know the main components that make up the architecture of this powerful container orchestrator. Some of them are listed below, followed by a short sketch of kubectl commands for inspecting them:

  • Pods: The smallest deployment unit that groups one or more containers with shared resources.
  • Services: Define how Pods are accessible on the network. They act as internal load balancers or external access points.
  • Volumes: Provide a persistent storage system for data used by containers.
  • Namespaces: Allow the organization of resources into isolated groups within the same cluster, facilitating management in environments with multiple projects or teams.
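
As a rough sketch of how these components show up in practice, here are a few kubectl commands you can run against any cluster; the namespace name team-a is just an illustrative example:

kubectl get pods                   # list Pods in the current namespace
kubectl get services               # list Services and their cluster IPs
kubectl get namespaces             # list Namespaces in the cluster
kubectl create namespace team-a    # create an isolated group of resources (example name)
kubectl get persistentvolumes      # list persistent Volumes available to the cluster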

How does Kubernetes facilitate day-to-day work?
Kubernetes manages your applications by automatically detecting and responding to container failures, balancing network traffic, and scaling resources as needed. This means less worry about infrastructure and more focus on development and innovation.
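
As an illustration of that self-healing and scaling behavior, here is a minimal sketch assuming a hypothetical Deployment named my-app is already running in the cluster:

kubectl get pods                                # note the name of a Pod managed by my-app
kubectl delete pod my-app-abc123                # delete it (this Pod name is made up)
kubectl get pods -w                             # watch Kubernetes start a replacement Pod automatically
kubectl scale deployment/my-app --replicas=5    # scale out with a single command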


First Steps in the World of Kubernetes
Starting with Kubernetes is simpler than it seems. With tools like Minikube and kubectl, you can create your local cluster to experiment and learn without the need for expensive cloud resources or complex configurations.
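
A minimal sketch of those first steps, assuming Minikube and kubectl are already installed:

minikube start          # create a single-node local Kubernetes cluster
kubectl cluster-info    # confirm kubectl can reach the cluster
kubectl get nodes       # the Minikube node should report a Ready status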

Getting Hands-On: An Example Application
Let's deploy an example application step by step. We start by creating a Pod to host our application, then configure a Service to expose our Pod to the network, and finally, scale our application by increasing the number of Pods through a Deployment.

As a practical example, let's create a simple application using Nginx, a popular web server that can be easily deployed on a Kubernetes cluster. This example will teach you how to create a Pod, expose it to the network using a Service, and finally scale the application with a Deployment.

Prerequisites
Have Kubernetes installed. For local environments, Minikube is a great, cost-free choice. If you prefer the cloud, many providers offer free tiers for creating a Kubernetes cluster, but be aware of potential extra costs. Choose the environment that best suits your learning goals and budget, so you can explore Kubernetes without financial worries.

Have kubectl installed, the Kubernetes command-line tool used to interact with the cluster.
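
A quick, optional way to confirm both prerequisites are in place before continuing:

kubectl version --client    # prints the installed kubectl version
minikube version            # prints the installed Minikube version
minikube status             # shows whether the local cluster is running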

Step 1: Creating a Pod with Nginx
Pod Definition: First, you will create a YAML file to define the Pod that will run Nginx. Save this file as nginx-pod.yaml.

The YAML configuration must specify the necessary fields: the API version, the kind (Pod in this case), metadata (such as the Pod's name and labels), and the spec detailing the container image to use (nginx, for example) and any ports it should expose. This definition is what allows you to run and manage Nginx within a Pod in your Kubernetes environment.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx  # this label is what the Service selector matches in Step 2
spec:
  containers:
  - name: nginx
    image: nginx:latest
    ports:
    - containerPort: 80

Creating the Pod: To create the Pod in Kubernetes, run:

kubectl apply -f nginx-pod.yaml

To check if the Pod is running, you can use the kubectl get pods command.
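
A few optional checks that help confirm the Pod is healthy; the name nginx-pod comes from the manifest above:

kubectl get pods                  # nginx-pod should reach the Running status
kubectl describe pod nginx-pod    # detailed status, events, and the image in use
kubectl logs nginx-pod            # Nginx logs from the container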

Step 2: Exposing the Pod with a Service
To make Nginx accessible outside the Kubernetes cluster, you will create a Service that exposes the Pod on the network.

Service Definition: Create a YAML file for the Service named nginx-service.yaml. This file specifies the type of Service (e.g., ClusterIP, NodePort, or LoadBalancer) and targets the Pod using a selector that matches the labels defined in your Pod's YAML. This is what enables external access to your application, allowing users and other services to reach your Nginx server through a stable access point.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer

Note that the Service's selector must match the Pod's labels; that is why the Pod definition above includes app: nginx under labels. Without a matching label, the Service would have no endpoints and could not route traffic to the Pod.

Creating the Service: Apply the Service using the command kubectl apply -f nginx-service.yaml. This tells Kubernetes to set up the networking defined in your YAML file, linking the Service to the Pod through the matching selector and labels and making Nginx accessible in the way the Service type specifies, both inside the cluster and, with a LoadBalancer or NodePort type, outside it.
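
To confirm the Service was created and actually picked up the Pod, the following checks can help:

kubectl get services                      # nginx-service should appear with its type and port
kubectl describe service nginx-service    # the Endpoints field should list the Pod's IP;
                                          # an empty Endpoints list usually means the selector
                                          # does not match the Pod's labels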

Accessing Nginx: If you are using Minikube, you can retrieve the Service URL with the command:

minikube service nginx-service --url

This Minikube-specific command returns the external access point of your Nginx Service, letting you test and verify that the server is reachable from outside the Kubernetes cluster, just as end users would reach it.
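
If curl is available on your machine, you can combine the two commands to confirm that Nginx answers with its default welcome page:

curl "$(minikube service nginx-service --url)"    # should return the default Nginx HTML page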

Step 3: Scaling the Application with a Deployment
Deployment Definition: To easily manage and scale the Pods, you will create a Deployment. Create a YAML file named nginx-deployment.yaml with the following content; it specifies how Kubernetes should manage your application's Pods and how many instances should be running at any given time.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2 
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

By defining a Deployment, you can easily scale your application up or down by adjusting the number of replicas, allowing for more robust and flexible management of your Nginx server within the Kubernetes environment.

Creating the Deployment: Use the following command: kubectl apply -f nginx-deployment.yaml

Applying nginx-deployment.yaml instructs Kubernetes to create and continuously maintain the desired state of your application as defined in the Deployment, including the number of replicas, the Pod template, and the update strategy. It's a crucial step for scaling and managing your application dynamically within the Kubernetes environment.
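
These optional commands are a handy way to watch the Deployment converge on its desired state:

kubectl rollout status deployment/nginx-deployment    # waits until all replicas are available
kubectl get deployments                               # shows desired vs. ready replica counts
kubectl get pods -l app=nginx                         # lists the Pods carrying the app: nginx label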

Scaling the Deployment: To increase the number of replicas, use the command: kubectl scale deployment/nginx-deployment --replicas=3

This command adjusts the number of Pod instances for your application within the Kubernetes environment, letting you scale up to meet increased demand or scale down to conserve resources. This flexibility is a key benefit of using Kubernetes for application deployment and management.


By setting the number of replicas in your Kubernetes deployment, you not only ensure your application's availability but also enable effective load balancing across Pods. This ensures that no single instance bears all the traffic, enhancing the application's performance and resilience.

Checking the Deployment: Verify that the replicas are running with the command kubectl get deployment. This completes the deployment of an Nginx application on Kubernetes, exposing it through a Service and scaling it with a Deployment. This basic example illustrates how to start using Kubernetes to manage containerized applications efficiently. As you become more familiar with Kubernetes, you can explore more advanced features and management techniques for your applications.
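
As a final sketch, after scaling to three replicas you can confirm the result with:

kubectl get deployment nginx-deployment    # READY should eventually report 3/3
kubectl get pods -l app=nginx              # lists the Deployment's replicas (plus the standalone nginx-pod from Step 1, if it still exists)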


Taking the first steps with Kubernetes and deploying your first Nginx application reveals the possibilities this powerful tool offers. Kubernetes not only simplifies container management with its automation and scalability but also paves the way for a new era of developing robust and efficient applications. Remember that practice makes perfect as you continue to explore and delve deeper into its functionalities. So don't hesitate to experiment, build labs, learn from challenges, and expand your knowledge to master this essential tool in the world of technology.


For more in-depth technical information on Kubernetes, I recommend consulting the official documentation. It's an excellent learning resource and reference for developers and operators at all levels, offering comprehensive guides, tutorials, and reference materials to deepen your understanding of Kubernetes and its capabilities. The official Kubernetes documentation is available at kubernetes.io/docs, where you can find detailed information on setup, deployment, management, and the architecture of Kubernetes.

Top comments (2)

Inti

Thank you! Great article!

Cassius Clay Filho

Thank you for the feedback. @devinti