Hey everyone, I hope you're doing great! Today we'll be discussing what's new in the recently released Kubernetes 1.27 update. Kubernetes versioning follows the semantic versioning scheme X.Y.Z, where:
X -> Major version (major, potentially breaking updates)
Y -> Minor version (new features, released a few times a year)
Z -> Patch version (small bug fixes or performance improvements)
For example, in v1.27.2 the major version is 1, the minor version is 27, and the patch version is 2.
The latest Kubernetes release is packed with changes, including new features, API changes, cleanups, bug fixes, and improved documentation.
Kubernetes releases happen three times every year.
Kubernetes has a tradition of selecting a theme for every release. The theme for Kubernetes 1.25 was Combiner, which signified the importance of the individual components that combine to build the project you see today.
Kubernetes 1.26 had the theme Electrifying, which acknowledged the diversity of compute resources that power Kubernetes while raising awareness of the importance of the project's energy consumption footprint.
The theme for the current version is Chill Vibes, reflecting the calmness of the 1.27 release.
This release, the first of 2023, includes 60 enhancements. Kubernetes features go through a multi-stage release process: alpha, beta, then GA (stable). In this release, 9 enhancements have been promoted to production ready. You can check more about them here.
Kubernetes release notes group enhancements into categories: API changes, features, bugs or regressions, failing tests, and other cleanups.
Kubernetes 1.27 ships a long list of enhancements, as documented in the release notes, but here we'll dive into only a few of the important ones.
As you might already know, Kubernetes relied on a custom image registry, k8s.gcr.io, which was hosted by Google and has now been frozen. The new registry, registry.k8s.io, has been generally available for several months and is controlled by the community itself.
During the keynote at KubeCon Detroit '22, Google announced the renewal of its $3 million donation, and Amazon announced a matching donation. The new registry is hosted across Google, Amazon, and several other cloud providers, which brings several benefits including faster downloads and reduced bandwidth costs.
If you're a maintainer, you will need to update your Kubernetes manifests and Helm charts to point at the new registry. You can learn more about it here!
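In most cases, switching registries is a one-line change in your manifests. Here's a minimal sketch (the pod name, image, and tag below are just placeholders for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: registry-demo
spec:
  containers:
    - name: pause
      # Before: image: k8s.gcr.io/pause:3.9   <- old, frozen registry
      # After: pull from the community-run registry instead
      image: registry.k8s.io/pause:3.9
```

The same substitution applies to Helm chart values that reference the old registry.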
Container security is a crucial aspect of any Kubernetes cluster; without proper security measures, containers might be vulnerable to attacks that can compromise the entire cluster.
Seccomp, short for Secure Computing Mode, is a Linux kernel feature that restricts the system calls a process can make. In Kubernetes, seccomp can be used to enhance container security by limiting certain privileged operations: a seccomp profile is a set of rules specifying which system calls a container is allowed to make, reducing the attack surface by limiting the available kernel interfaces. Kubernetes allows you to write a custom seccomp profile specifying the allowed system calls, or you can use the default seccomp profile provided by the container runtime as a secure baseline configuration.
However, this was previously disabled by default; it can now be enabled by default, giving Kubernetes an extra layer of security. Kubernetes provides a kubelet command-line flag to enable this:
kubelet --seccomp-default
With this flag set, the container runtime's default seccomp profile is applied to every workload on that node; set it on each node in the cluster to cover them all.
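The same behavior can be enabled through the kubelet configuration file's `seccompDefault` field, and individual pods can also opt in explicitly through their security context. A minimal sketch (the two documents below belong in separate files; pod and container names are placeholders):

```yaml
# KubeletConfiguration fragment: apply the runtime's default
# seccomp profile to all workloads on this node
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
seccompDefault: true
---
# Alternatively, opt in per pod via the security context
apiVersion: v1
kind: Pod
metadata:
  name: seccomp-demo
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.k8s.io/pause:3.9
```

The per-pod form is handy when you want to adopt the default profile gradually instead of flipping it on node-wide.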
This update introduces a new API for accessing node logs. A node is a control plane or worker machine that's part of the Kubernetes cluster, and a node log is the log data generated by a particular node. Node logs are helpful for identifying issues with the services running on a node. A cluster administrator may find it challenging to identify a problem; typically they need to SSH or RDP into the node to examine the service logs and diagnose the issue.
With the 1.27 release, the node log query feature simplifies this process by enabling administrators to access logs using kubectl. This is especially helpful when working with Windows nodes, where problems like CNI misconfigurations and other hard-to-detect issues can prevent containers from starting up.
To utilize the node log query feature, it's important to enable the NodeLogQuery feature gate for the relevant node and ensure that both the enableSystemLogHandler and enableSystemLogQuery options are set to true in the kubelet configuration. Once these requirements are met, you can retrieve node logs. For instance, you can retrieve the kubelet service logs using the following example:
kubectl get --raw "/api/v1/nodes/node.example/proxy/logs/?query=kubelet"
Follow the documentation here to query all node logs.
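The kubelet configuration prerequisites described above map onto a configuration fragment like the following sketch, applied on the node whose logs you want to query:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  NodeLogQuery: true       # turn on the node log query feature
enableSystemLogHandler: true   # expose the /logs endpoint on the kubelet
enableSystemLogQuery: true     # allow querying service logs through it
```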
Kubernetes parallel jobs refer to workloads that enable multiple pods to run concurrently in order to complete a task. These parallel jobs are commonly used for computationally intensive tasks or batch processing, where the workload can be divided into smaller pieces and executed in parallel to reduce processing time.
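As a quick illustration of a parallel Job (the Job name, image, and chunk counts below are placeholders), the `parallelism` and `completions` fields control how many pods run at once and how many pieces of work exist in total:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-demo
spec:
  completions: 6          # total pieces of work to finish
  parallelism: 3          # pods running at the same time
  completionMode: Indexed # give each pod a stable index
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox:1.36
          # In Indexed mode each pod receives its piece of the work
          # via the JOB_COMPLETION_INDEX environment variable
          command: ["sh", "-c", "echo processing chunk $JOB_COMPLETION_INDEX"]
```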
When running parallel jobs in Kubernetes, it is often necessary to impose specific constraints on the pods. For example, all pods may need to run in the same availability zone or on certain types of hardware, such as GPU model X or Y, but not a mixture of both. To achieve this, Kubernetes provides a suspend field that allows custom queue controllers to determine when a job should start. When a job is suspended, it remains idle until the custom queue controller decides to unsuspend it, taking into account various scheduling factors. However, once a job is unsuspended, the actual placement of pods is handled by the Kubernetes scheduler, and the custom queue controller has no influence over where the pods will be allocated.
This is where the new feature of mutable scheduling directives for jobs comes into play. This feature enables the updating of a job's scheduling directives before it begins. Essentially, it allows custom queue controllers to influence pod placement without needing to directly handle the assignment of pods to nodes themselves. To learn more about this check out the Kubernetes Enhancement Proposal 2926.
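Sketching the idea (the zone label value and names below are placeholders): a queue controller can patch a suspended Job's scheduling directives, such as its node affinity, and only then release it by setting `suspend` to false:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: queued-job
spec:
  suspend: true   # held back until the queue controller releases it
  template:
    spec:
      restartPolicy: Never
      # While the Job is suspended, the controller may update this
      # affinity, e.g. to pin all pods to a single availability zone
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: topology.kubernetes.io/zone
                    operator: In
                    values: ["zone-a"]
      containers:
        - name: worker
          image: busybox:1.36
          command: ["sh", "-c", "echo running"]
```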
Persistent volumes in Kubernetes offer various access modes that you may already be familiar with. These modes include "Read-Only Many," where the volume can be mounted as read-only by multiple nodes; "Read-Write Many", allowing the volume to be mounted as read-write by multiple nodes; and "Read-Write Once", which permits the volume to be mounted as read-write by a single node. However, it does allow multiple pods to access the volume when those pods are running on the same node.
A recent addition to Kubernetes, starting from version 1.22, is the introduction of a new access mode called "Read-Write Once Pod". This access mode restricts volume access to a single pod within the cluster. This approach ensures that only one pod at a time can write to the volume, making it particularly beneficial for stateful applications that require exclusive access to storage. More details about this feature can be found in the provided link.
As of Kubernetes 1.27 and later, the "Read-Write Once Pod" beta feature is enabled by default. It's important to note that this feature is exclusively supported for CSI (Container Storage Interface) volumes. To enable this feature, simply include the "ReadWriteOncePod" mode when creating the Persistent Volume Claim (PVC).
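A minimal sketch of such a claim (the claim name, storage size, and storage class are placeholders; the storage class must be backed by a CSI driver):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exclusive-claim
spec:
  accessModes:
    - ReadWriteOncePod   # only one pod in the whole cluster may use this volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-example
```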
Overall, the addition of the "Read-Write Once Pod" feature to Kubernetes provides enhanced control over volume access, catering to the needs of stateful applications that rely on exclusive storage access.
That's it for now. I have only discussed a few of the enhancements, but you can learn more about the others here.
For more insights and updates, feel free to follow me on Twitter. Additionally, you can find more of my articles on Hashnode and Dev Community. Stay connected for further discussions on Kubernetes, cloud computing, and other exciting topics.