In this article I will give you an introduction to Kubernetes.
You can watch the full video on YouTube:
So what we will cover today:
- Life before containerisation
- Container deployment
- What is Container orchestration
- What is Kubernetes
- Why use Kubernetes
- Some benefits of Kubernetes
- Architecture of Kubernetes
Please like, share, and subscribe if you like the video. It really helps the channel.
There were two main ways to deploy applications before utilising containers:

All applications and services were hosted on a single server, which means the server did not follow the single-responsibility principle. The server was jam-packed with everything: with multiple applications running on one server, there were cases where one application took up most of the resources and, as a result, the other applications underperformed.
Pros:

- Easy to host
- Easy to maintain

Cons:

- Very hard to manage
- Very hard to scale
- Issues with resource allocation across all of the different services
As a solution to Single Server Deployment, virtualisation was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server's CPU. Every application gets its own Virtual Machine, which means a complete separation of concerns.

This implementation still remains a valid option today. We can implement scalability and resource allocation in a much better way than with Single Server Deployment.
Pros:

- Less downtime: if one service goes down, the others keep functioning
- Better load handling
- Better resource allocation

Cons:

- Difficult environment to manage
- Security risks
- Requires a lot of resources
- Keeping VM images consistent; inconsistencies can cause application errors
Containers are similar to VMs, but they have relaxed isolation properties, which means that unlike VMs, containers share the host OS and utilise its kernel to run.
Pros:

- CI/CD (continuous integration/continuous deployment) pipelines
- Environment consistency
- Resource isolation
- Resource utilisation
- Loosely coupled applications

Cons:

- A learning curve to implement
- Added complexity, even for simple applications
- Migrating from VM to container deployments is time-consuming
With Docker we can run a single instance of an image with the docker run command. But what happens when the number of users increases and a single instance is no longer enough? One way to scale is to run the docker run command multiple times.

We then need to keep an eye on which containers are running and what their states are, and in case of a failure, run docker run again to bring up a new instance of the application.

Another aspect is the health of the Docker host itself. If the Docker host crashes and we cannot restart it, the containers running on that host become inaccessible as well.

In short, we would need a dedicated engineer who keeps monitoring the containers and their health.
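To make this pain concrete, here is a rough sketch of manual scaling with plain Docker; the image name my-app:1.0 and the ports are made-up placeholders:

```shell
# Start three instances of the same image by hand, each on its own host port
# (my-app:1.0 is a hypothetical image):
docker run -d --name my-app-1 -p 8081:8080 my-app:1.0
docker run -d --name my-app-2 -p 8082:8080 my-app:1.0
docker run -d --name my-app-3 -p 8083:8080 my-app:1.0

# Watching container state is now our job:
docker ps --filter "name=my-app"

# If an instance dies, we have to notice and start a replacement ourselves:
docker run -d --name my-app-4 -p 8084:8080 my-app:1.0
```

And none of this survives a crash of the Docker host itself, which is exactly the gap container orchestration fills.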
Container Orchestration: a solution consisting of tools and scripts that help us host containers in a production environment.

- it consists of multiple Docker hosts that can host containers, so if one of them fails the application remains accessible through the others
- it allows us to deploy hundreds of container instances of our application with a single command
- it allows us to scale up and scale down based on what we need
- it allows advanced networking between containers
- it load balances requests across the different container instances
With Docker ⇒ docker CLI ⇒ a single instance per command

With Kubernetes ⇒ kubectl ⇒ 1000 instances in a single command
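As a sketch of what that single command looks like (the deployment name and image are placeholders, and a running cluster is assumed):

```shell
# Create a deployment from an image:
kubectl create deployment my-app --image=nginx

# Scale it out with one command; Kubernetes schedules the new
# instances across the nodes of the cluster for us:
kubectl scale deployment my-app --replicas=1000
```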
Kubernetes is an open-source Container Orchestration platform for managing containerised workloads and services, such as those running on Docker, and it facilitates both declarative configuration and automation.
We can script and automatically allocate resources to nodes inside the Kubernetes environment, which allows the infrastructure to run much more effectively and efficiently. Kubernetes takes care of scaling and failover for your application, provides deployment patterns, and more.
Kubernetes is often referred to as k8s.

Kubernetes was built to provide us with a reliable infrastructure. It's a tool that helps us manage the containers we have, and its modular architecture makes it easy to maintain and scale.

The main reason it was created is the ability to deploy and update applications at scale: it allows us to deploy our application across thousands of instances.

At its core, Kubernetes removes the manual processes involved in hosting and managing containers.
- Highly portable and 100% open-source: Kubernetes is compatible across platforms, and it is managed by the Cloud Native Computing Foundation
- Workload scalability: k8s is very efficient; new instances can be added and removed easily and without any downtime, and it handles all container scaling with ease
- High availability: k8s is designed to tackle the availability of both the containers and the infrastructure, so the environment stays available even when individual containers or nodes fail
- Designed for deployment: it speeds up the ability to test, deploy, and manage the different phases of the deployment lifecycle
- Service discovery and load balancing: k8s can expose a container using a DNS name or an IP address. If traffic is high, the k8s cluster load balances and distributes the traffic across instances so that the deployment stays stable
- Storage orchestration: k8s gives us the ability to use local storage or any cloud storage. Local storage can be an SSD on the machine k8s is running on; if k8s is connected to a public cloud like Azure or AWS, we can utilise the cloud storage and all of the security features the cloud gives us
- Self-healing environment: if there is a failure or a container stops responding, k8s detects that behaviour and either restarts the process or kills the container and initiates a new one
- Automated rollouts and rollbacks: the desired state of containers can be described using k8s, and the actual state is changed to the desired state at a controlled rate, so we can roll forward or roll back easily
- Automatic bin packing: we can specify how much CPU and RAM each container needs, and k8s fits the containers onto nodes to make the best use of resources
- Secret and configuration management: k8s lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
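Several of these features come together in a single Deployment manifest. The sketch below is a minimal, hypothetical example (all names, the image, and the values are placeholders, not from a real project): it declares a desired number of replicas, resource requests and limits (bin packing), a liveness probe (self-healing), a rolling-update strategy (rollouts and rollbacks), an environment variable read from a Secret, and a Service that exposes the pods via a DNS name (service discovery and load balancing).

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 3               # desired state: three instances
  strategy:
    type: RollingUpdate     # update pods gradually; easy to roll back
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0          # placeholder image
          ports:
            - containerPort: 8080
          resources:                 # automatic bin packing
            requests:
              cpu: "250m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"
          livenessProbe:             # self-healing: restart if unhealthy
            httpGet:
              path: /healthz
              port: 8080
          env:
            - name: DB_PASSWORD      # secret management
              valueFrom:
                secretKeyRef:
                  name: my-app-secrets
                  key: db-password
---
apiVersion: v1
kind: Service
metadata:
  name: my-app              # reachable in-cluster via this DNS name
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

A manifest like this would be applied with kubectl apply -f my-app.yaml, and a bad update rolled back with kubectl rollout undo deployment/my-app.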
A k8s cluster consists of a set of nodes.

A node is a machine, physical or virtual, on which the Kubernetes software is set up. A node is a worker machine, and it is where our containers will be launched by k8s.

What happens if one of the machines running our containers fails? We need another machine to keep the application running. A cluster is a set of nodes grouped together to keep the application up.

Now, who is responsible for managing the cluster? Who sends the cluster information about the containers and their configuration, and how do the nodes handle failures and logs?
The k8s architecture is cluster-based and revolves around two key areas:

- the k8s master, which controls all of the activity within the k8s infrastructure
- the nodes, which are Linux environments controlled by the master
k8s Master Node
The master node is a node with the k8s control plane components installed. The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.
When we install k8s on a system, we are installing the following components:

- API Server: acts as the frontend for k8s; users, the CLI, and management devices all use the API server to communicate with the cluster
    - it is a RESTful interface, and each connection can be secured
    - it is the main channel through which the cluster and the nodes communicate
    - it implements an interface so that different tools and libraries can communicate with it effectively
    - it interacts with the worker nodes and provides them with the required information
- etcd: the component that stores the configuration and state information used to manage the nodes within a cluster
    - a distributed, reliable key/value store
    - stores the configuration information required by the nodes in the cluster
    - the only way to access it is via the API server
- Scheduler: manages the scheduling of work within the cluster
    - a key component of the k8s cluster
    - responsible for distributing the workload from the master node
    - tracks the utilisation of the cluster nodes, then places workloads on the available resources; it looks for newly created containers and assigns them to nodes
- Controller Manager: a daemon server, the brain behind the orchestration
    - it runs in continuous loops, gathers information, and provides it to the API server
    - the controllers are responsible for noticing and responding when nodes, containers, or endpoints fail
    - the controller makes the decision to create new containers to replace the failed ones
    - this component is responsible for moving the actual state of a component towards the desired state
- Container runtime: the software used to run the containers
- kubelet: the agent that runs on each node in the cluster; it is responsible for making sure the containers on its node are running as expected
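To illustrate how the scheduler places work, here is a minimal, hypothetical pod manifest (the names, label, and values are assumptions, not from the article): the resource requests tell the scheduler how much free capacity a node must have, and the node selector restricts which nodes qualify.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod                # hypothetical pod name
spec:
  nodeSelector:
    disktype: ssd             # only nodes labelled disktype=ssd qualify
  containers:
    - name: my-container
      image: nginx            # placeholder image
      resources:
        requests:             # the scheduler picks a node with at least
          cpu: "100m"         # this much unreserved CPU and memory
          memory: "64Mi"
```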
kubectl is the k8s CLI, which is used to deploy and manage applications on a k8s cluster, get cluster information, and get the status of the nodes.
kubectl run hello-minikube // deploy an application on the cluster
kubectl cluster-info // view information about the cluster
kubectl get nodes // list all of the nodes that are part of the cluster
A k8s node is formed of:

- kubelet
    - it runs and manages the containers that run inside the node
    - it is responsible for information sharing between the node and the master API server
    - it interacts with etcd to read the configurations and the keys
    - it manages the pods on the node, their volumes and secrets, the creation of new containers, and health checkups
- container runtime
    - the software that actually runs the containers
- k8s proxy (kube-proxy)
    - it runs the k8s services inside the node, helping make the services available to external hosts and forwarding requests to the assigned containers
    - it performs primitive load balancing across the containers
Thank you for reading.