Michael Mekuleyi

Hardening Cluster Security in Google Kubernetes Engine

Introduction

This technical article is a detailed continuation of my talk at DevFest Ikorodu, where I spoke extensively about key security concepts in Kubernetes and how to build a truly secure cluster network. In this article, I will highlight security concepts that are wholly focused on Google Kubernetes Engine (GKE) on the Google Cloud Platform: security policies native to Google Cloud and particularly to Kubernetes, container-optimized images, and security practices that run at deploy time. This article requires some knowledge of Kubernetes and Google Cloud, and a passion for building secure systems.

Understanding the Shared Responsibility Model

Google's ideology for building secure systems is called the Shared Responsibility Model: the security of your workloads, networks, and data is a joint responsibility between Google and the client (you). With respect to Google Kubernetes Engine, Google is responsible for securing the control plane and its components, such as the API server, the etcd database, and the controller manager, while the user is responsible for securing nodes, containers, and pods. Across IaaS, PaaS, and SaaS, Google clearly defines its own responsibility as well as the client's. This is shown in the diagram below.

Google's Shared Responsibility Model

Key Security Practices in Kubernetes on Google Cloud

Google Cloud provides a series of services, policies, and configurations that strengthen authentication and authorization across Kubernetes networks, data systems, and workloads. Most of these policies are configurable, which makes it the client's responsibility to ensure that the cluster has the appropriate security policy in place. In this article, we will focus on the following security concepts:

  • Network Policies
  • Shielded GKE nodes
  • Container-Optimized OS images
  • Binary Authorization
  • Private Clusters

Network Policies

By default, all pods in a Kubernetes cluster can communicate with each other. However, Kubernetes provides objects that can limit inter-pod communication; these objects are called network policies. Kubernetes network policies allow you to specify how a pod may communicate with various network entities, based on pod label selectors or on specific IP address and port combinations. These policies can be defined for both ingress and egress traffic.

GKE provides the option to enforce the use of a network policy when a cluster is created, to ensure that inter-pod communication is controlled. You can easily configure this by running the following commands:



# Enforce network policy on new clusters 
gcloud container clusters create <cluster-name> --enable-network-policy

# Enforce network policy on existing clusters
gcloud container clusters update <cluster-name> --update-addons=NetworkPolicy=ENABLED



An example of a simple network policy is shown below.
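
As a hedged illustration (the labels and port below are hypothetical placeholders, not from the original talk), the following manifest allows ingress to pods labeled app: db only from pods labeled app: backend, on port 5432:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
spec:
  # Apply this policy to the (hypothetical) database pods
  podSelector:
    matchLabels:
      app: db
  policyTypes:
    - Ingress
  ingress:
    # Only accept traffic from backend pods, and only on TCP 5432
    - from:
        - podSelector:
            matchLabels:
              app: backend
      ports:
        - protocol: TCP
          port: 5432

Any traffic that does not match this rule, such as ingress from a pod labeled app: frontend, is dropped for the selected pods.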

Shielded GKE nodes

Google also provides Shielded GKE nodes, which increase cluster security by giving each node a strong, verifiable identity and integrity. Under the hood, Google Cloud uses Shielded Compute Engine virtual machines as the Kubernetes cluster nodes. These virtual machines are hardened against boot- and kernel-level compromise because they use a virtual Trusted Platform Module (vTPM) and Secure Boot. Shielded VMs verify the signatures of all components in the boot process, ensuring that the individual components and modules in the VM have not been tampered with.

Shielded GKE nodes prevent attackers from impersonating nodes in a cluster in the event of a pod vulnerability being exploited.

You can enable Shielded GKE nodes on new or existing clusters with the following commands:



# Enable Shielded GKE nodes on new cluster
gcloud container clusters create <cluster-name> --enable-shielded-nodes

# Enable Shielded GKE nodes on existing cluster
gcloud container clusters update <cluster-name> --enable-shielded-nodes

# Verify that Shielded GKE nodes are enabled (shieldedNodes should show enabled: true)
gcloud container clusters describe <cluster-name>



There is no extra cost for using Shielded GKE nodes; however, they generate more logs, which will generally lead to an overall increase in logging costs.
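
Beyond the cluster-wide flag, Secure Boot and integrity monitoring can also be configured per node pool. A minimal sketch, assuming a hypothetical pool name:

# Create a node pool with Secure Boot and integrity monitoring enabled
# (the pool name is a placeholder)
gcloud container node-pools create my-hardened-pool \
    --cluster <cluster-name> \
    --shielded-secure-boot \
    --shielded-integrity-monitoring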

Container-Optimized OS images

Container-Optimized OS (also known as the cos_containerd image) is a Linux-based OS image provided by Google for running secure, production-ready workloads. It is optimized and hardened specifically for running enterprise container workloads. Google continuously scans the image for vulnerabilities at the kernel level and patches and updates any affected package. The root filesystem is always mounted read-only, which prevents attackers from making changes to the filesystem. The image is essentially stateless, though it can be customized to allow writes to specific directories.
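
On recent GKE versions cos_containerd is already the default node image, but you can request it explicitly when creating a cluster or node pool. A minimal sketch, with placeholder names:

# Explicitly request the Container-Optimized OS (containerd) image type
gcloud container clusters create <cluster-name> --image-type=cos_containerd

# The same flag works at the node-pool level
gcloud container node-pools create <pool-name> --cluster <cluster-name> --image-type=cos_containerd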

Binary Authorization

Binary Authorization is a deploy-time security service provided by Google that ensures only trusted container images are deployed to GKE clusters. It integrates seamlessly with Container Analysis, a GCP service that scans container images stored in Container Registry for vulnerabilities. A Binary Authorization policy comprises one or more rules that an image must satisfy before it is allowed to be deployed to the cluster. Binary Authorization can also require that only attested images are deployed; an attested image is one that has been verified by an attestor. At deploy time, Binary Authorization uses the attestor to verify the attestation. Any image that does not match the Binary Authorization policy is rejected and will not be deployed.

The following commands enable Binary Authorization on new and existing GKE clusters:



# Enable binary authorization on a new cluster
gcloud container clusters create <cluster-name> --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE --zone <zone>

# Enable binary authorization on an existing cluster
gcloud container clusters update <cluster-name> --binauthz-evaluation-mode=PROJECT_SINGLETON_POLICY_ENFORCE --zone <zone>


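The project-level policy itself is a YAML document that you can export, edit, and re-import with gcloud container binauthz policy export and gcloud container binauthz policy import. As a hedged sketch, a policy requiring attestations might look like the following (the project ID and attestor name are hypothetical):

# policy.yaml: reject any image without an attestation from my-attestor
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
    - projects/my-project/attestors/my-attestor
# Let Google-maintained system images through so GKE components keep running
globalPolicyEvaluationMode: ENABLE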

Even though Binary Authorization prevents unauthorized images from being deployed, you can specify the break-glass flag as an annotation on the pod to allow it to be created even if its image violates the Binary Authorization policy. The following is an example of a pod specification that uses the break-glass annotation:



apiVersion: v1
kind: Pod
metadata:
  name: my-break-glass-pod
  annotations:
    alpha.image-policy.k8s.io/break-glass: "true"
spec:
  containers:
    - name: app
      image: <image>   # placeholder for the image that violates the policy





Private Clusters

GKE private clusters isolate a cluster's network connectivity from the public internet, for both inbound and outbound traffic. This is possible because the nodes in the cluster have only internal private IP addresses and no public-facing IP addresses. If nodes require outbound internet access, a managed Network Address Translation (Cloud NAT) gateway can be used. For inbound access, external clients can reach the applications inside the cluster through Kubernetes Service objects of type NodePort or LoadBalancer. Because the nodes have no direct internet access, they cannot pull public container images, for example from Docker Hub. To use public images, it is advised that you either create a Cloud NAT gateway, as sketched below, or upload the images to a private Container Registry and point your cluster at them.
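As a hedged sketch of the Cloud NAT option (the router and NAT names are hypothetical, and <network-name> and <region> are placeholders for your VPC network and region):

# Create a Cloud Router in the cluster's VPC network
gcloud compute routers create my-nat-router \
    --network <network-name> \
    --region <region>

# Attach a NAT configuration so private nodes get outbound internet access
gcloud compute routers nats create my-nat-config \
    --router my-nat-router \
    --region <region> \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges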

The level of access to a private cluster via its endpoints can be controlled through any of the following configurations:

  • Public endpoint access disabled
  • Public endpoint access enabled; authorized networks enabled for limited access
  • Public endpoint access enabled; authorized networks disabled

The following commands create private clusters in each of the aforementioned configurations:


Public endpoint access disabled

gcloud container clusters create my-private-cluster \
    --create-subnetwork name=my-subnet \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --enable-private-endpoint \
    --master-ipv4-cidr 172.20.4.32/28

Public endpoint access enabled; authorized networks enabled for limited access

gcloud container clusters create my-private-cluster-1 \
    --create-subnetwork name=my-subnet-1 \
    --enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.20.8.0/28

Public endpoint access enabled; authorized networks disabled

gcloud container clusters create my-private-cluster-2 \
    --create-subnetwork name=my-subnet-2 \
    --no-enable-master-authorized-networks \
    --enable-ip-alias \
    --enable-private-nodes \
    --master-ipv4-cidr 172.20.10.32/28





Conclusion

In this article, I discussed in detail the active steps you can take to secure your cluster and ensure that your Kubernetes workloads are safe and secure. If you enjoyed reading this article, kindly follow me on Twitter. You can also like and share this article with anyone interested in learning about securing their GKE clusters. Thank you and be safe!
