Kubernetes is a powerful container orchestration tool that can help you manage and deploy your monolithic application. This blog covers the basics of Kubernetes and how to use it for application deployment. By the end, you should have a solid understanding of Kubernetes and be able to deploy your application with it.
Introduction
Kubernetes, an open-source platform, simplifies managing and scaling containerized applications. By automating deployment, scaling, and management across multiple hosts, it enhances infrastructure efficiency and reliability. Kubernetes ensures containers run where they should and maintain the desired application state.
Kubernetes creates a cluster of nodes, with a master node controlling and managing the entire cluster. The master node makes decisions about deploying, scaling, and scheduling containers.
Kubernetes supports automatic scaling, load balancing, and self-healing, efficiently managing multiple containers and services. Developers can focus on building and delivering applications while Kubernetes manages containers at scale.
Now, let’s dive deep into Kubernetes architecture.
Key Components of Kubernetes
Pod
The smallest deployable unit in Kubernetes is the pod. A pod wraps one or more containers (most often a single container) and represents a single instance of a running process in the cluster. Pods are designed to be ephemeral, meaning they can be created, destroyed, and replaced as needed.
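As a minimal sketch (the name and image below are illustrative, not taken from this article's application), a single-container Pod manifest looks like this:

```
apiVersion: v1
kind: Pod
metadata:
  name: server-pod
  labels:
    app: server
spec:
  containers:
    - name: server
      image: nginx:1.25   # illustrative image
      ports:
        - containerPort: 80
```

In practice you rarely create bare pods like this; as we will see below, a Deployment creates and replaces pods for you.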
Node
A node, also known as a worker node, is a machine (virtual or physical) in the cluster on which pods are scheduled and run. Each node has a finite capacity determined by its CPU, memory, and storage.
Node Group
A collection of nodes forms a node group. Node groups are used in cluster deployments to organize and manage worker nodes based on specific criteria such as hardware specifications, availability zones, regions, or other desired attributes.
Service
In Kubernetes, a service provides a stable network endpoint for a set of pods running the same application containers. It acts as an internal load balancer, distributing network traffic across the pods that it selects.
There are different types of services in Kubernetes:
NodePort: This service type exposes the service on a static port (allocated from the range 30000–32767 by default) on every node in the cluster. Traffic arriving at that port on any node is forwarded to the service.
ClusterIP: This default service type exposes the service on an internal cluster IP address, making it accessible only within the cluster.
LoadBalancer: This service type creates an external load balancer and assigns a unique IP address. It automatically routes and distributes incoming traffic.
Ingress: Strictly speaking, Ingress is a separate resource rather than a Service type, but it belongs in this discussion. It manages external traffic (traffic coming from outside the cluster) and provides a way to configure routes to services based on defined rules, acting as an entry point for incoming traffic. However, an Ingress does not handle traffic by itself; it relies on an ingress controller, which continuously watches Ingress resources in the cluster and routes external traffic accordingly.
For more understanding, refer to the image below.
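As a sketch (the host and names here are illustrative), an Ingress that routes traffic to a backing service might look like this:

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: server-ingress
spec:
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: server-service
                port:
                  number: 3000
```

Note that this only takes effect if an ingress controller (such as ingress-nginx) is installed in the cluster.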
Deployment
A Deployment is an API object in Kubernetes that provides a declarative YAML configuration specifying parameters such as the desired number of replicas, the container image to use, environment variables, resource limits, and more. This configuration allows you to deploy and manage the lifecycle of your applications.
To increase the number of replicas for the same container, you simply change the value of the “replicas” key in the configuration file. This will create additional pods of the same container, ensuring scalability and availability of your application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: server:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "256Mi"
              cpu: "1000m"
          envFrom:
            - secretRef:
                name: secret
In addition to the Deployment YAML file, the standard way to manage environment variables is with Secrets. Secrets store sensitive information, such as passwords and tokens, in base64-encoded form. For more information, see the Kubernetes documentation on managing Secrets with kubectl.
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: secret
data:
  NODE_ENV: ZGV2ZWxvcG1lbnQ=
  PORT: MzAwMA==
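Values under a Secret's `data` field must be base64-encoded. The encoded strings above can be produced (and checked) with standard tooling:

```shell
# Encode plain values for the Secret's "data" field
echo -n 'development' | base64            # ZGV2ZWxvcG1lbnQ=
echo -n '3000' | base64                   # MzAwMA==

# Decode to verify what a Secret actually contains
echo -n 'ZGV2ZWxvcG1lbnQ=' | base64 --decode   # development
```

Note the `-n` flag: without it, `echo` appends a trailing newline that gets encoded into the value.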
To deploy your application, you will also need a Service file. Its selector must match the pod labels defined in the Deployment's template (app: server here); this is how Kubernetes identifies which pods the Service should route traffic to.
apiVersion: v1
kind: Service
metadata:
  name: server-service
spec:
  selector:
    app: server
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 31110
Conclusion
Kubernetes offers horizontal scalability, letting you scale applications simply by adding or removing pods. It keeps applications available by distributing pods across multiple nodes behind services, and it minimizes downtime through rolling upgrades and by automatically rescheduling pods onto healthy nodes when a node fails.
In the next blog, we will explore the usage of Helm charts and their application in microservices.
Thanks for reading! If you have any queries, feel free to email me at harsh.make1998@gmail.com.
Until next time!