Kubernetes is a powerful container orchestration tool that can help you manage and deploy your monorepo application. In this blog, we will cover the basics of Kubernetes and how to use it to deploy your application. By the end of this blog, you should have a good understanding of Kubernetes and be able to deploy your own application using Kubernetes.
Kubernetes is an open-source platform that makes it easy to manage and scale containerized applications. By automating deployment, scaling, and management across multiple hosts, it makes your infrastructure more efficient and reliable. It ensures containers run where they should and maintains the desired application state.
Kubernetes organizes machines into a cluster of nodes, with a master node (the control plane) controlling and managing the entire cluster. The master node makes decisions about deploying, scaling, and scheduling containers.
Because Kubernetes supports automatic scaling, load balancing, and self-healing, it can manage many containers and services efficiently. With Kubernetes, developers can focus on building and delivering applications while the platform manages containers at scale.
Now, let's dive deep into Kubernetes architecture.
The smallest deployable unit in Kubernetes is the pod. A pod is a wrapper around one or more containers (most often a single Docker container) and represents a single instance of a running process in the cluster. Pods are designed to be short-lived: they can be created, destroyed, and replaced as needed.
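As a minimal sketch, a single-container pod can be defined like this (the name, labels, and image are placeholders for illustration):

```yaml
# Hypothetical minimal pod manifest running a single container.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod        # placeholder name
  labels:
    app: demo
spec:
  containers:
    - name: demo
      image: nginx:1.25  # any container image works here
      ports:
        - containerPort: 80
```

In practice you rarely create bare pods like this; you let a Deployment (covered below) create and replace them for you.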
In a cluster, multiple pods are deployed on a machine called a "node" (also known as a "worker node"). A node can only host a limited number of pods, depending on its storage capacity and memory allocation.
Multiple nodes form a node group. Node groups are commonly used in cluster deployments to organize and manage worker nodes based on specific criteria, such as hardware specifications, availability zones, regions, or any other desired attribute.
If you have multiple pods running containers for the same application, a service is used to route traffic coming from outside the node. In Kubernetes, a service acts as an internal load balancer and distributes network traffic across the pods that are part of that service.
There are different types of services available in Kubernetes:
NodePort: the service is exposed on a specific port (from a defined port range) on each selected node in the cluster. The node listens on that port and forwards incoming traffic to the service.
ClusterIP: exposes the service on an internal cluster IP address, so it is only accessible within the cluster. This is the default service type.
LoadBalancer: creates an external load balancer and assigns it a unique IP address. Once created, it automatically routes traffic to the service and distributes the load.
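As a quick illustration of the default type, a minimal ClusterIP service might look like this (the names and ports are assumptions, not from a real deployment):

```yaml
# Hypothetical ClusterIP service: reachable only from inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: internal-api     # placeholder name
spec:
  type: ClusterIP        # optional, since ClusterIP is the default
  selector:
    app: api             # matches pods labeled app: api
  ports:
    - protocol: TCP
      port: 80           # port the service listens on
      targetPort: 3000   # port the pod's container listens on
```

Other pods in the cluster can then reach it at `internal-api:80`, while nothing outside the cluster can.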
Ingress manages external traffic (traffic coming from outside the cluster) and provides a way to configure routes to services based on defined rules. Ingress acts as an entry point for incoming traffic into the cluster. Ingress itself does not handle load balancing; it relies on an ingress controller, an external service that continuously monitors Ingress resources within the cluster and has the capabilities to handle outside traffic coming into it.
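As a sketch, an Ingress rule that routes a hostname to a service could look like this (the hostname is a placeholder, and an ingress controller such as ingress-nginx is assumed to be installed in the cluster):

```yaml
# Hypothetical Ingress: routes requests for api.example.com
# to the starter-service defined later in this post.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: starter-ingress     # placeholder name
spec:
  rules:
    - host: api.example.com # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: starter-service
                port:
                  number: 3000
```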
A Deployment is an API object defined with a declarative YAML config that specifies parameters such as the desired number of replicas, the container image to use, environment variables, resource limits, and more, allowing you to deploy and manage the lifecycle of your applications. To increase the number of replicas for the same container, change the value of the "replicas" key; Kubernetes will create additional pods of the same container.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-deployment
  labels:
    app: server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server
  template:
    metadata:
      labels:
        app: server
    spec:
      containers:
        - name: server
          image: starter_server:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 3000
          resources:
            limits:
              memory: "256Mi"
              cpu: "1000m"
          envFrom:
            - secretRef:
                name: secret
```
In addition to the Deployment YAML file, the standard way to manage environment variables is a Secret, a resource format for storing such values. For more information, check out the Kubernetes documentation on Secrets.
```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: secret
data:
  NODE_ENV: ZGV2ZWxvcG1lbnQ=
  PORT: MzAwMA==
```
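Note that the values under `data` must be base64-encoded (the two above decode to `development` and `3000`). They can be generated like this:

```shell
# Encode plain values as base64 for the Secret's data fields.
# -n stops echo from appending a trailing newline before encoding.
echo -n 'development' | base64   # ZGV2ZWxvcG1lbnQ=
echo -n '3000' | base64          # MzAwMA==
```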
And here is the service file for the deployment of the application. To make it work, make sure the metadata and selector names match those in the deployment file, as the master node uses them to identify the application's resources.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: starter-service
spec:
  selector:
    app: server
  type: NodePort
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
      nodePort: 31110
```
Kubernetes provides horizontal scalability, allowing you to scale your applications by easily adding or removing pods. It maintains application availability by distributing pods across multiple nodes behind a service, and it minimizes downtime through rolling upgrades and automatic rescheduling and replacement on healthy nodes in the event of a node failure.
In the next blog, we will look at Helm charts and their usage in microservice applications.
Thanks for reading this. If you have any queries, feel free to email me at firstname.lastname@example.org.
Until next time!