DEV Community

Ansu Jain

An Overview of Kubernetes Architecture

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. It provides a highly scalable and flexible environment for deploying and managing containerized workloads.

The Kubernetes architecture consists of several components that work together to provide a stable and scalable environment for containerized applications. In this article, we’ll explore the various components of Kubernetes architecture, as depicted in the diagram below.

(Diagram: Kubernetes architecture)
At the core of the Kubernetes architecture is the Kubernetes control plane, which consists of the Kubernetes API server, etcd, scheduler, and controller manager. These components work together to manage the Kubernetes cluster.

Pods
A pod is the smallest and simplest Kubernetes object. It represents a single instance of a running process in a cluster. A pod encapsulates one or more containers, storage resources, and network configurations. Pods provide a logical host for containers to run in, and they ensure that containers share resources such as storage volumes and network interfaces.
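As a minimal sketch, a single-container Pod manifest might look like this (the name, labels, and image tag are illustrative, not prescribed):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod        # Illustrative Pod name
  labels:
    app: my-app
spec:
  containers:
    - name: web
      image: nginx:1.25   # Any container image works here
      ports:
        - containerPort: 80   # Port the container listens on
```

In practice you rarely create bare Pods; higher-level objects like Deployments (covered below) create and manage Pods for you.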

Nodes
A node is a physical or virtual machine that runs one or more pods. Each node has a unique hostname and IP address, and it is responsible for running and managing the containers that are deployed to it. Nodes provide the underlying compute resources for running the containers.

Clusters
A cluster is a group of nodes that work together as a single computing resource. It provides high availability and reliability by replicating services across multiple nodes. A cluster also ensures that applications are resilient to node failures and can automatically reassign workloads to healthy nodes.

Namespaces
A namespace is a logical boundary within a cluster that provides a way to group related resources. It is commonly used to isolate workloads and prevent conflicts between them. Namespaces also provide a way to partition resources and limit resource usage.
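To make the partitioning concrete, here is a hedged sketch of a Namespace plus a ResourceQuota that caps resource usage within it (the name and limits are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a          # Illustrative namespace name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a     # Quota applies only inside this namespace
spec:
  hard:
    pods: "10"            # At most 10 Pods in the namespace
    requests.cpu: "4"     # Total CPU requests capped at 4 cores
    requests.memory: 8Gi  # Total memory requests capped at 8 GiB
```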

Containers
A container is a lightweight and portable executable package that contains everything needed to run an application, including the code, runtime, libraries, and system tools. Containers provide a consistent and reliable environment for running applications across different computing environments.

Services
A service is an abstraction that provides a stable IP address and DNS name for a set of pods. It acts as a load balancer and ensures that traffic is evenly distributed across the pods. Services also provide service discovery, allowing applications to locate and communicate with other services within the cluster.

Load Balancing
Services can be configured to provide load balancing for a set of pods. Load balancing distributes network traffic evenly across all the pods in the service, improving application availability and scalability.

Service Discovery
Services provide a way to discover the IP address and port of a pod running a particular service. This allows clients to connect to the service without needing to know the IP address or port of the individual pods.

Service Types
Services can be configured to use different types of network access, such as ClusterIP, NodePort, and LoadBalancer.

  • ClusterIP: ClusterIP is the default service type in Kubernetes. It provides a virtual IP address for a set of pods that is only accessible within the Kubernetes cluster.
  • NodePort: NodePort exposes a service on a static port on each worker node in the cluster. This allows the service to be accessed from outside the cluster.
  • LoadBalancer: LoadBalancer exposes a service externally using a cloud provider’s load balancer. This allows the service to be accessed from outside the cluster using a public IP address.

The choice between ClusterIP, NodePort, and LoadBalancer depends on your specific use case and requirements.
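As a sketch, a NodePort Service might look like this (the name, labels, and port values are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service   # Illustrative name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # Port exposed inside the cluster
      targetPort: 8080  # Port the pods listen on
      nodePort: 30080   # Static port opened on every node (default range 30000-32767)
```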

If your application needs to be accessible from outside the cluster and can benefit from the traffic management features of a cloud load balancer, such as health checks and session persistence, then LoadBalancer is likely the best choice.

However, if your application does not need to be accessed from outside the cluster and you don’t require advanced traffic management capabilities, then using ClusterIP or NodePort may be sufficient.

It’s important to note that LoadBalancer type services may also incur additional costs, as they often require the use of a cloud provider’s load balancing service, which may be charged separately.

Let’s walk through all the fields of a Service YAML:

apiVersion: v1
kind: Service
metadata:
  name: my-service # Name of the service
  labels:
    app: my-app
spec:
  type: LoadBalancer # Type of the service
  selector:
    app: my-app
  ports:
    - name: http
      port: 80 # Port exposed by the service
      targetPort: 8080 # Port of the pod that the service is directing traffic to
      protocol: TCP # Protocol used by the service

  • apiVersion: The version of the Kubernetes API to use for this object. In this case, it's v1, the core API version.
  • kind: The kind of object to create. In this case, it's a Service.
  • metadata: Data that helps identify the service, such as the name and labels.
      • name: The name of the service.
      • labels: Key-value pairs that can be used to organize and select objects.
  • spec: The specification of the service.
      • type: The type of the service. In this case, it's LoadBalancer, which exposes the service externally using a cloud provider's load balancer.
      • selector: The labels used to select the pods that the service will route traffic to.
      • ports: The ports that the service will listen on and route traffic to.
          • name: The name of the port. This is optional, but can be useful for documentation purposes.
          • port: The port number that the service will listen on.
          • targetPort: The port number of the pod that the service will direct traffic to.
          • protocol: The protocol used by the service, which can be TCP, UDP, or SCTP. If not specified, TCP is used by default.

Overall, the service YAML file defines a named service that listens on port 80 and routes traffic to pods with the label app: my-app. It is of type LoadBalancer, so it can be accessed externally using a cloud provider's load balancer.

Volumes
Volumes provide a way to store and share data between containers and pods. They provide persistent storage for containerized applications and allow data to survive container restarts and rescheduling.
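A hedged sketch of a Pod that mounts a volume follows; here the volume is backed by a PersistentVolumeClaim, and the claim name my-pvc is an assumption (the claim would have to exist already):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
    - name: app
      image: nginx:1.25
      volumeMounts:
        - name: data                  # Must match a volume name below
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc             # Assumed pre-existing claim
```

Because the data lives in the claim rather than the container filesystem, it survives container restarts and Pod rescheduling.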

ConfigMaps/Secrets
ConfigMaps and Secrets provide a way to store configuration data and sensitive information in the Kubernetes cluster. They can be used to store environment variables, command-line arguments, and configuration files.
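As a minimal sketch, a ConfigMap and a Secret might be defined like this (the names, keys, and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"       # Plain key-value pair, usable as an env var
  config.yaml: |          # Or a whole configuration file
    featureFlag: true
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:               # stringData avoids manual base64 encoding
  DB_PASSWORD: s3cr3t     # Illustrative value only
```

Pods can then consume these via environment variables or mounted files.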

Deployments
A deployment is a declarative way to manage a set of replicas of a pod template. It provides a simple way to roll out updates, rollbacks, and scaling of application replicas.

Let’s walk through each field of a Deployment with an example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

  • apiVersion: The Kubernetes API version that the YAML file uses. In this example, we're using apps/v1.
  • kind: The kind of object we're creating. In this example, we're creating a Deployment object.
  • metadata: Metadata about the object, such as the name, labels, and annotations. In this example, we're giving the Deployment a name of nginx-deployment. Metadata is used to identify and label resources, as well as provide other relevant information about the resource.
  • spec: The desired state of the object. In this example, we're specifying that we want 3 replicas of our nginx Deployment.
      • selector: The labels that should be used to match Pods to this Deployment. In this example, we're using the label app: nginx.
      • template: The Pod template that should be used to create new Pods for this Deployment.
          • metadata within template: Metadata about the Pod template, such as the labels and annotations. In this example, we're specifying that the Pods should have a label of app: nginx.
          • spec within template: The desired state of the Pods that will be created from this template. In this example, we're specifying that each Pod should have a single container, named nginx, and that it should use the nginx:latest Docker image. We're also specifying that the container should listen on port 80.
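The rollout behavior mentioned earlier can be tuned through the Deployment's strategy field. A minimal sketch (the specific maxSurge and maxUnavailable values are just one reasonable choice, not a requirement):

```yaml
spec:
  strategy:
    type: RollingUpdate     # Default strategy; the alternative is Recreate
    rollingUpdate:
      maxSurge: 1           # At most 1 extra Pod during an update
      maxUnavailable: 0     # Never drop below the desired replica count
```

With these settings, an update replaces Pods one at a time while keeping the full replica count serving traffic.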

Ingress
Ingress is a Kubernetes object that provides a way to expose HTTP and HTTPS routes from outside the Kubernetes cluster to services within the cluster. It enables you to define rules for routing traffic to different services based on the request path, host, and other parameters.
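A minimal Ingress sketch that routes by host and path to the my-service Service defined earlier (the hostname example.com and the path are illustrative, and an Ingress controller must be installed in the cluster for the rules to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: example.com          # Route requests for this hostname
      http:
        paths:
          - path: /api           # ...whose path starts with /api
            pathType: Prefix
            backend:
              service:
                name: my-service # The Service shown earlier
                port:
                  number: 80
```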

Conclusion
Kubernetes is a powerful and complex system for managing containerized applications at scale. In this article, we covered the basics of Kubernetes architecture, including Pods, Nodes, Clusters, and Services. We also explored different types of Services and how they are used for load balancing and service discovery.

With a solid understanding of these concepts, you should be well-equipped to begin working with Kubernetes and managing your own containerized applications.
