The way organizations use Kubernetes has evolved quickly in recent years. All the major cloud providers now offer managed Kubernetes services so that their customers can easily automate the deployment, scaling, and management of containerized applications.
But how do these platforms perform? Do they live up to the hype? How well do they integrate? What is it like to maintain and work with them? That's why in this article we review the managed Kubernetes solutions from the top cloud providers: Amazon Elastic Kubernetes Service (EKS) on AWS, Google Kubernetes Engine (GKE) on Google Cloud Platform, and Azure Kubernetes Service (AKS) on Microsoft Azure.
It is best to get a deep understanding, look beyond price, and consider factors like scalability, security, and features before making a final decision. We have grouped the comparison by the features available in each managed Kubernetes service. So, let's dig in.
My Background: Cloud Engineer | AWS Community Builder | AWS Educate Cloud Ambassador | 4x AWS Certified | 3x OCI Certified | 3x Azure Certified.
General Overview
Amazon Elastic Kubernetes Service
Amazon Elastic Kubernetes Service (EKS) is a managed service, generally available since June 2018, for running Kubernetes on AWS. It integrates easily with the services, apps, and protocols that run in a Kubernetes environment.
EKS is designed entirely around Kubernetes, so everything you need to manage and deploy containers is included, whether it is seamless integration with third-party tools for logs and performance metrics or advanced scaling capabilities.
EKS is an excellent option if you already have an established AWS cloud architecture and want to experiment with Kubernetes, or if you plan to migrate workloads across clouds.
Azure Kubernetes Service
Azure Kubernetes Service (AKS) is a managed Kubernetes solution that Microsoft made available in 2018. It is a fully managed service that makes containerized apps easy to deploy and manage in a Kubernetes environment.
AKS runs both on the Azure public cloud and on-premises, which helps deliver mission-critical applications to customers. Azure also has Government Cloud support, so government agencies and their partners can run sensitive workloads.
AKS is worth it when it comes to seamless integration with Microsoft's tools, including Visual Studio, Active Directory, and the rest of the Microsoft cloud SaaS services. If you have an established enterprise agreement with Microsoft and no preference for another architecture, AKS will suit your requirements perfectly.
Google Kubernetes Engine
Kubernetes itself started as an internal Google project, so it makes sense that Google was the first to deliver a managed Kubernetes solution, in 2014, known as Google Kubernetes Engine (GKE).
Google Kubernetes Engine (GKE), as a managed, production-grade container orchestration engine, is the most resilient and well-rounded Kubernetes offering when compared to AKS and EKS. It supports the Istio service mesh out of the box, and gVisor for an extra layer of security between running containers. One of GKE's key benefits is that service upgrades and new Kubernetes versions become available almost immediately, while the other providers take longer to release updates.
High availability of Clusters
High availability of clusters is crucial if you are running production-critical applications on Kubernetes. It ensures your cluster remains available when something goes wrong: because your services do not rely on a single data center, your services are not interrupted if one data center goes down.
GKE offers excellent support for highly available clusters in two modes: multi-zonal and regional. In multi-zonal mode there is just one master node, but worker nodes can be spread across multiple zones. In regional mode, the master nodes are likewise replicated across all the zones in the region, providing superior HA.
As of this writing, AKS does not provide high availability for its master nodes. The worker nodes, however, can be deployed across Availability Zones for greater availability.
EKS likewise provides HA across both worker and master nodes, spread over different availability zones in the same way as GKE.
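To make the regional mode concrete, here is a minimal sketch of requesting a regional (highly available) GKE cluster with the google-cloud-container Python client library. This is an illustration under assumptions, not an official recipe: the project ID, region, and cluster name are placeholders you would replace with your own values.

```python
# A minimal sketch: creating a regional GKE cluster so the control plane and
# nodes are replicated across the zones of a region. Assumes the
# google-cloud-container library; all names below are placeholders.
from google.cloud import container_v1


def create_regional_cluster(project_id: str, region: str, name: str) -> None:
    client = container_v1.ClusterManagerClient()

    cluster = container_v1.Cluster(
        name=name,
        initial_node_count=1,  # one node per zone in the region
    )

    # Using a region (e.g. "us-central1") rather than a single zone in the
    # parent path asks GKE for a regional cluster.
    parent = f"projects/{project_id}/locations/{region}"
    operation = client.create_cluster(parent=parent, cluster=cluster)
    print(f"Cluster creation started: {operation.name}")


if __name__ == "__main__":
    create_regional_cluster("my-project", "us-central1", "ha-demo-cluster")
```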
SLA
SLA (service level agreement) is a powerful acronym in all industries, and it is no different within the cloud community.
All cloud providers offer different SLAs, which guarantee different uptimes according to the availability zones and region of deployment. For example, Amazon EKS guarantees 99.95% uptime; AKS offers 99.95% when availability zones are enabled and 99.9% when they are disabled; and GKE splits its managed Kubernetes clusters, offering 99.5% uptime for zonal deployments and 99.95% for regional deployments.
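To see what those percentages mean in practice, here is a small back-of-the-envelope calculation that converts each SLA figure quoted above into the downtime it allows per 30-day month. The arithmetic is the only thing being shown; the SLA numbers come straight from the comparison above.

```python
# Rough arithmetic: how much downtime per 30-day month each SLA still allows.
HOURS_PER_MONTH = 30 * 24  # 720 hours in a 30-day month


def allowed_downtime_minutes(uptime_percent: float) -> float:
    return HOURS_PER_MONTH * 60 * (1 - uptime_percent / 100)


for offering, sla in [
    ("EKS", 99.95),
    ("AKS (availability zones enabled)", 99.95),
    ("AKS (availability zones disabled)", 99.9),
    ("GKE (zonal)", 99.5),
    ("GKE (regional)", 99.95),
]:
    minutes = allowed_downtime_minutes(sla)
    print(f"{offering}: {sla}% uptime -> ~{minutes:.0f} minutes of downtime/month")
```

A 99.95% SLA still permits roughly 22 minutes of downtime a month, while 99.5% permits about 3.6 hours, which is worth keeping in mind for zonal GKE deployments.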
Differences in SLA for Kubernetes control planes are another area to compare. A Kubernetes control plane is the management infrastructure run by the cloud provider to perform all the essential processes needed to operate your worker nodes.
Pricing for the control plane varies by provider. The AKS control plane comes free of cost; you do not pay anything for it. GKE was also free of cost initially, but Google has announced it will soon start charging for the control plane. EKS has charged for its control plane from the beginning.
Resource Availability with Node Pools
Node pools allocate different kinds of machines to a cluster for different types of workloads. For example, storage systems require better storage disks, while workloads like visual data analysis require more CPU and GPU. With node pools, we can provide the best-suited resources to specific nodes and get optimal performance on those nodes without allocating the same resources to every node in the cluster.
GKE and EKS are leading here, since they both provide node pooling functionality. AKS, on the other hand, does not provide node grouping and instead recommends separate clusters for different scenarios.
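As an illustration of how a node pool carries its own machine type, here is a hedged sketch of adding a compute-heavy pool to an existing GKE cluster with the google-cloud-container Python client. The project, cluster, pool name, and machine type are all placeholders chosen for the example.

```python
# A sketch: adding a second node pool with a larger machine type to an existing
# GKE cluster, so compute-heavy workloads can target it. Assumes the
# google-cloud-container library; names and machine types are illustrative.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Nodes in this pool get a CPU-heavy machine type; the rest of the cluster
# keeps its existing, cheaper nodes.
compute_pool = container_v1.NodePool(
    name="compute-pool",
    initial_node_count=2,
    config=container_v1.NodeConfig(machine_type="n1-highcpu-16"),
)

parent = "projects/my-project/locations/us-central1/clusters/demo-cluster"
operation = client.create_node_pool(parent=parent, node_pool=compute_pool)
print(f"Node pool creation started: {operation.name}")
```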
Scalability
GKE, AKS, and EKS all let you scale nodes up very quickly, just by using the UI. For autoscaling, GKE leads with the most mature solution: a user only needs to specify the desired VM size and the range of nodes in the node pool, and Google Cloud manages the rest. EKS and AKS come after GKE in autoscaling because they need some manual configuration to set up.
GKE and AKS also provide further customization of scaling, unlike EKS. In GKE and AKS, you can configure a cluster with the Cluster Autoscaler, which scales your nodes up or down based on the required workload. That is especially helpful when you run short-lived processes. EKS can also implement autoscaling policies, but you have to set them up manually, whereas GKE and AKS provide the Cluster Autoscaler by default.
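For GKE, setting the autoscaling bounds on a node pool is a small API call. The sketch below, again assuming the google-cloud-container Python client, shows what enabling node autoscaling on an existing pool might look like; the resource name is a placeholder for your own project, region, cluster, and pool.

```python
# A sketch: enabling node autoscaling (1 to 5 nodes) on an existing GKE node
# pool. Assumes the google-cloud-container library; the resource name is a
# placeholder.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

request = container_v1.SetNodePoolAutoscalingRequest(
    name=(
        "projects/my-project/locations/us-central1/"
        "clusters/demo-cluster/nodePools/compute-pool"
    ),
    # The autoscaler keeps the pool between 1 and 5 nodes based on demand.
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True,
        min_node_count=1,
        max_node_count=5,
    ),
)
operation = client.set_node_pool_autoscaling(request=request)
print(f"Autoscaling update started: {operation.name}")
```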
Bare Metal Clusters
As the name suggests, bare metal clusters are deployed on cloud infrastructure with no virtualization layer in between, in other words, no VMs. This approach has various benefits. The infrastructure overhead is reduced drastically, which frees up more computing and storage resources for application deployments. Access to more computing resources also increases computing power, which helps reduce latency and downtime for application requests.
Comparing GKE vs. AKS vs. EKS for bare-metal support: EKS allows the use of bare metal nodes, while GKE and AKS do not. EKS does not default to bare metal nodes, though, as they are expensive to deploy.
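If you want to see which EC2 instance types are actually bare metal before sizing EKS worker nodes, a quick boto3 query can list them. This is a small sketch, not an EKS-specific API: it simply filters the DescribeInstanceTypes response on its BareMetal field, and the region is a placeholder.

```python
# A sketch: listing EC2 instance types that are bare metal (candidates for
# metal-backed EKS worker nodes), using boto3. Filtering is done client-side
# on the "BareMetal" field of the DescribeInstanceTypes response.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

metal_types = []
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for itype in page["InstanceTypes"]:
        if itype.get("BareMetal"):
            metal_types.append(itype["InstanceType"])

print(sorted(metal_types))  # e.g. i3.metal, m5.metal, ...
```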
Resource Limits
Resource limits are handled differently across providers: limits are per account with EKS, AKS handles limits per subscription, and GKE applies limits on a per-project basis. EKS offers a maximum of 100 nodes per cluster per account, AKS offers 500 nodes per cluster, and GKE offers 5,000 nodes per cluster per project.
While most limits look clear on paper, some are not. In AKS, for example, the maximum number of nodes you can have depends on whether the nodes are in a Virtual Machine Scale Set or an Availability Set. In EKS, the maximum number of nodes per cluster varies with the node's instance type. GKE, by contrast, provides more highly available nodes without such variables.
Resource monitoring
In terms of resource monitoring, all three cloud providers have offerings. GKE uses Stackdriver for resource monitoring within the Kubernetes cluster: it monitors the master and worker nodes and all Kubernetes components inside the platform, and handles logging as well. AKS offers Azure Monitor to evaluate container health and Application Insights to monitor the Kubernetes components. EKS relies on third-party tools and recommends Prometheus for resource monitoring.
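For the Prometheus route that EKS users typically take, pulling a metric out of the cluster is just an HTTP call against Prometheus's query API. The sketch below assumes a Prometheus server that already scrapes node_exporter metrics; the server URL and the metric expression are illustrative, not a fixed recipe.

```python
# A minimal sketch: querying a Prometheus server (e.g. one scraping an EKS
# cluster) for per-node CPU usage via the Prometheus HTTP API. The URL and the
# PromQL expression are placeholders and assume node_exporter metrics.
import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"

# Average CPU usage (in cores) per node over the last 5 minutes.
query = "sum by (instance) (rate(node_cpu_seconds_total{mode!='idle'}[5m]))"

resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
resp.raise_for_status()

for result in resp.json()["data"]["result"]:
    instance = result["metric"].get("instance", "unknown")
    cpu_cores = float(result["value"][1])
    print(f"{instance}: {cpu_cores:.2f} cores in use")
```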
Role-based access control (RBAC)
Role-based access control (RBAC) in Kubernetes allows admins to configure dynamic policies that deny unauthorized access. All three hosted providers offer RBAC implementations, but they configure it differently. EKS has a slight advantage, with a tighter security posture overall, as it treats RBAC and pod security policies as mandatory, unlike GKE and AKS.
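Because RBAC is a core Kubernetes feature, the same role definitions work on EKS, AKS, and GKE once you are authenticated against the cluster. Here is a small sketch using the official Kubernetes Python client to create a read-only Role; the namespace and role name are placeholders.

```python
# A sketch: defining a namespaced, read-only RBAC Role with the official
# Kubernetes Python client. Works the same on EKS, AKS, and GKE; the
# namespace and role name are placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your current kubeconfig context
rbac = client.RbacAuthorizationV1Api()

# Holders of this role can get, list, and watch Pods, and nothing else.
pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""],          # "" means the core API group
            resources=["pods"],
            verbs=["get", "list", "watch"],
        )
    ],
)

rbac.create_namespaced_role(namespace="default", body=pod_reader)
```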
Availability as a Cloud Provider
All three providers have offerings available in most regions globally. Google Cloud has the best global availability of the three, with services in almost every region, followed by Azure, which comes ahead of AWS after launching services in Latin America and Africa.
Also, EKS is not available in the AWS government cloud; AKS, however, is available in the Azure Government cloud, whereas Google has no government cloud.
Secure Image Management
All three cloud providers offer container image registry services that provide secure image management and stable build creation. But the degree of control these providers give you varies.
The image signing feature of Azure Container Registry (ACR) lets users verify the authenticity of their container images. Similarly, immutable image tags in Elastic Container Registry (ECR) let users keep container builds secure at all times.
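On the ECR side, immutable tags are just a repository setting. A short boto3 sketch of creating a repository with tag immutability (plus scan-on-push) looks like this; the repository name and region are placeholders.

```python
# A sketch: creating an ECR repository with immutable image tags and
# scan-on-push enabled, using boto3. Repository name and region are placeholders.
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

response = ecr.create_repository(
    repositoryName="my-app",
    imageTagMutability="IMMUTABLE",                    # tags cannot be overwritten
    imageScanningConfiguration={"scanOnPush": True},   # scan images on every push
)
print(response["repository"]["repositoryUri"])
```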
Lastly, Binary Authorization in GKE prevents the deployment of images that conflict with the configured policies and triggers an automatic lock-down of those risky images.
Elastic Container Registry and Azure Container Registry also support resource-based permissions for repository-level access control to prevent unauthorized access, which Google Container Registry does not.
Pricing
Each vendor has its specific features, limitations, and pricing plans. GKE and AKS provide management and deployment of clusters, including the master and worker machines, at no cost; you are charged only for the resources you use as you go, such as bandwidth, storage, and virtual machines. In comparison, Amazon EKS costs $0.10 per hour for each deployed cluster, on top of the instances and services you use.
Concerning overall price, here are some rough figures to help you estimate costs when choosing a Kubernetes platform. This cost comparison assumes you have 20 worker nodes, each with 80 CPUs and 320 GB of RAM, for a total of 14,600 billable node-hours per month.
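As a back-of-the-envelope sketch of that scenario, the only provider-specific fee taken from the text is the $0.10 per hour EKS control plane charge; the per-node hourly price below is a placeholder, since it depends entirely on the instance type you pick.

```python
# Back-of-the-envelope cost arithmetic for the scenario above: 20 worker nodes
# at 730 hours each, i.e. 14,600 billable node-hours per month.
NODES = 20
HOURS_PER_NODE = 730
NODE_HOURLY_PRICE = 4.00          # placeholder: depends on the chosen instance type
EKS_CONTROL_PLANE_HOURLY = 0.10   # $0.10/hour per EKS cluster (from the text)

node_cost = NODES * HOURS_PER_NODE * NODE_HOURLY_PRICE
eks_control_plane_cost = HOURS_PER_NODE * EKS_CONTROL_PLANE_HOURLY  # ~$73/month

print(f"Worker nodes:          ${node_cost:,.2f}/month")
print(f"EKS control plane:     ${eks_control_plane_cost:,.2f}/month")
print("GKE/AKS control plane: $0 (at the time of writing, per the comparison above)")
```

The takeaway is that the EKS control plane adds roughly $73 per cluster per month, which is usually small next to the worker node bill but adds up if you run many clusters.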
Final Words
AWS, Microsoft Azure, and Google Cloud Platform have all claimed to offer the best managed Kubernetes solution over the past years. The choice is yours: whether you want the advantage of Google's most mature and budget-friendly product, want to leverage your Microsoft Enterprise Agreement to get better pricing and support on Azure, or want to make your transition to the cloud easier with EKS on Amazon.
To find out, it is always important to compare the storage, network, and compute features of each provider before you decide on a managed Kubernetes service. It is also critical to compare costs, since services can vary between regions and differ for each configuration.
Thoroughly testing a service's features and capabilities in your environment will ultimately give you real pricing and performance metrics, which will help you determine the Kubernetes offering that best suits your business needs.
I hope this guide helps you understand the different managed Kubernetes services: EKS vs. AKS vs. GKE. Feel free to connect with me on LinkedIn.
You can view my badges here.
If you are interested in learning more about AWS, then follow me on GitHub.
If you liked this content, then do clap and share it. Thank you.
Top comments (3)
Two mistakes:
Also, another thing that differentiates GKE is the one-click deployment of common workloads from an application library that resembles Helm: you can deploy a stack that runs WordPress (with MySQL) with one click on GKE.
Also, GKE provides a service mesh (Istio) as part of the setup.
Both GKE and AKS nowadays default to private clusters (nodes do not have public IPs) as part of better security defaults. EKS seems to still offer public nodes by default.
Thank you, @andreidascalu.
I will take a look and update the post.
This is a useful comparison. We have an article on Devopedia at devopedia.org/google-kubernetes-en...
After reading this article, we're inspired to create articles on EKS and AKS. Thanks.