
shah-angita for Platform Engineers


Building and Deploying Containerized Applications on Kubernetes

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications. It groups containers into logical units for easy management and discovery, building upon 15 years of experience running production workloads at Google combined with best-of-breed ideas and practices from the community.

Kubernetes Components

Clusters

A Kubernetes cluster is the core building block of the Kubernetes architecture. It comprises multiple nodes, each of which can be a physical machine, a virtual machine, or a cloud instance. A cluster has one or more worker nodes that run containerized workloads, plus a control plane (which may itself span one or more nodes) that schedules workloads onto the worker nodes and monitors them.

Nodes

A node is a single compute host in a Kubernetes cluster; it can be a physical machine, a virtual machine, or a cloud instance. Each worker node runs an agent called the kubelet, which the control plane uses to monitor and manage it.

Pods

A pod is a group of one or more containers that share compute resources, storage, and a network namespace. Kubernetes scales resources at the pod level: if additional capacity is needed for an application running in a pod, Kubernetes replicates the whole pod rather than individual containers.
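As a minimal illustration, a single-container pod can be declared in YAML like this (the pod name, image, and port here are hypothetical placeholders, not part of the deployment walkthrough below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod        # hypothetical name for illustration
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest  # assumes this image exists in a reachable registry
    ports:
    - containerPort: 80
```

In practice you rarely create bare pods like this; you let a Deployment (covered next) create and manage them for you.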

Deployments

Deployments control the creation of a containerized application and keep it running by monitoring its state in real time. The deployment specifies how many replicas of a pod should run on a cluster. If a pod fails, the deployment recreates it.

Key Features of Kubernetes

Automated Rollouts and Rollbacks

Kubernetes rolls out changes to an application or its configuration progressively, monitoring application health to ensure it doesn't kill all instances at the same time. If something goes wrong, Kubernetes rolls the change back.
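Rollout behavior is tunable in a Deployment's spec. A sketch of a conservative rolling-update strategy (the field values shown are illustrative choices, not defaults you must use):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most one extra pod above the desired replica count
      maxUnavailable: 0  # never take a pod down before its replacement is ready
```

If a rollout misbehaves, kubectl rollout undo deployment/NAME reverts the Deployment to its previous revision.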

Service Discovery and Load Balancing

Kubernetes can expose a set of pods behind a single DNS name or IP address. If traffic is high, Kubernetes load balances requests across the pods to keep the deployment stable.

Storage Orchestration

Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, a public cloud provider, or a network storage system like iSCSI or NFS.
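Applications typically request storage through a PersistentVolumeClaim rather than binding to a specific backend. A minimal sketch, assuming a hypothetical claim name and that your cluster has a default StorageClass configured:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data       # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi
```

A pod then references the claim under spec.volumes and mounts it via volumeMounts, leaving the choice of underlying storage system to the cluster.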

Self-Healing

Kubernetes restarts containers that fail, replaces and reschedules containers when nodes die, kills containers that don't respond to user-defined health checks, and doesn't advertise them to clients until they are ready to serve.
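The "user-defined health checks" mentioned above are expressed as probes on a container. A sketch of HTTP liveness and readiness probes, assuming the application serves hypothetical /healthz and /ready endpoints on port 80:

```yaml
containers:
- name: myapp
  image: myapp:latest
  livenessProbe:          # failing this probe restarts the container
    httpGet:
      path: /healthz      # hypothetical health endpoint
      port: 80
    periodSeconds: 10
  readinessProbe:         # failing this probe removes the pod from service endpoints
    httpGet:
      path: /ready        # hypothetical readiness endpoint
      port: 80
    periodSeconds: 5
```

The liveness probe drives the restart behavior, while the readiness probe is what keeps a pod from being advertised to clients before it is ready to serve.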

Secret and Configuration Management

Kubernetes deploys and updates secrets and application configurations without rebuilding the image and without exposing secrets in the stack configuration.
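A minimal Secret manifest looks like the following (the name and key are hypothetical, and the value is a placeholder; real values should come from a secure source, never a committed file):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret            # hypothetical secret name
type: Opaque
stringData:
  DATABASE_PASSWORD: example    # placeholder value for illustration only
```

A container can then consume it without baking it into the image, for example by listing the secret under envFrom with a secretRef, which exposes its keys as environment variables.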

Deploying a Containerized Application on Kubernetes

Step 1: Create a Docker Image

First, you need to create a Docker image for your application. This involves writing a Dockerfile that specifies the base image, copies the application code, and sets the command to run the application.

FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy the application code and set the startup command
COPY . .
CMD ["python", "app.py"]

Step 2: Build the Docker Image

Build the Docker image using the Dockerfile.

docker build -t myapp .

Step 3: Push the Docker Image to a Registry

Push the Docker image to a Docker registry like Docker Hub.

docker tag myapp:latest <your-docker-hub-username>/myapp:latest
docker push <your-docker-hub-username>/myapp:latest

Step 4: Create a Kubernetes Deployment

Create a Kubernetes deployment YAML file that specifies the image and the number of replicas.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: <your-docker-hub-username>/myapp:latest
        ports:
        - containerPort: 80

Step 5: Apply the Deployment

Apply the deployment YAML file to create the deployment in Kubernetes.

kubectl apply -f deployment.yaml

Step 6: Verify the Deployment

Verify that the rollout has completed and that the pods are running.

kubectl rollout status deployment/myapp-deployment
kubectl get pods -l app=myapp

Step 7: Expose the Deployment as a Service

Expose the deployment as a Kubernetes service to access it from outside the cluster.

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: LoadBalancer

Apply the service YAML file.

kubectl apply -f service.yaml

Step 8: Access the Application

Access the application using the service's external address. Note that status.loadBalancer.ingress is a list, so the first entry must be indexed; on providers that assign hostnames instead of IPs (such as AWS), read .hostname in place of .ip.

kubectl get svc myapp-service -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

Conclusion

Kubernetes is a powerful tool for managing containerized applications, providing features like automated rollouts, service discovery, storage orchestration, self-healing, and secret management. By following the steps outlined above, you can deploy a containerized application on Kubernetes, ensuring it is scalable, reliable, and easy to manage. This approach aligns well with Platform Engineering principles, ensuring efficient and consistent deployment and management of applications.
