Google Cloud Platform (GCP) offers three container abstractions (or managed container services) to deploy and manage your applications: Cloud Run, GKE Standard, and GKE Autopilot.
Each of these offers different features and is "managed" to varying degrees, making them suitable for different applications and teams. This post walks through the differences between them to help you decide which of the three is the most suitable for your application.
A quick introduction
Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|
Fully managed: all you need to give Cloud Run is a container, and it takes care of deploying, securing, autoscaling, and monitoring | Managed Kubernetes control plane: you manage the nodes yourself | Fully managed Kubernetes: Google manages both the control plane and the nodes |
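To make the "all you need is a container" model concrete, a Cloud Run service can be described declaratively with a Knative Serving manifest. This is a minimal sketch; the service name, project, and image are hypothetical placeholders:

```yaml
# Sketch of a Cloud Run service definition (Knative Serving schema).
# Deploy with: gcloud run services replace service.yaml --region <region>
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-app                 # hypothetical service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/hello-app:latest  # your container image
          resources:
            limits:
              cpu: "1"
              memory: 512Mi
```

Everything else (provisioning, TLS, scaling, request routing) is handled by the platform.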
Feature comparison
Deployment
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
Automated deployments | Managed | Self-managed | Self-managed |
HTTP load balancing | Self-managed | Self-managed | Managed |
Language support | All | All | All |
Operating system | List of supported images | List of supported images | Only supports in-house Linux with Containerd (Can’t use Red Hat Enterprise Linux (RHEL), Linux with Docker, or Windows Server) |
Stateful apps | Not supported | Supported | Supported |
Daemon workload type in Kubernetes | NA | Supported | Supported |
Altering or adding new resources in the namespace | NA | Supported | Not supported |
Nodes and node pools
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
Node provisioning, node pools & setting cluster size (calculating the compute capacity your workload requires, choosing node size (CPU + RAM) based on that capacity, and choosing the cluster size to house those nodes) | Managed | Self-managed: you manually provision additional resources and set the overall cluster size | Managed: dynamically provisions resources based on your Pod specification |
Resource requests | Limited granularity of CPU and memory | Flexible CPU and memory sizes | CPU requests must use increments of 0.25 vCPU; Autopilot rounds requests up to the nearest 250m (for example, a request of 800m is adjusted to 1000m, i.e. 1 vCPU). The Pod's vCPU-to-memory ratio must be in the range 1:1 to 1:6.5; if it falls outside the allowed range for your selected compute class, Autopilot automatically increases the smaller resource. This impacts small services, which will likely be overscaled to match the ratio. Limits are set to the same values as requests |
Pod bursting (configuring Pods to burst into unused capacity on the node) | Not supported | Supported | Not supported: because all Pods have limits set equal to requests, resource bursting is not possible; make sure your Pod specification requests adequate resources rather than relying on bursting |
Pod affinity and anti-affinity | Not supported | Supported | Limited support |
Changing nodes (such as changing the underlying machine type if your workloads have specific compute requirements) | Not supported | Supported | Not supported |
[Node auto-repair](https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair) (liveness and readiness health checks) | Managed | Self-managed | Managed |
Node auto-upgrade | Managed | Self-managed | Managed |
Maintenance windows | Managed | Self-managed | Managed |
Surge upgrades | Managed | Self-managed | Managed |
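The Autopilot resource-request rules above are easiest to see in a Pod spec. The following sketch uses hypothetical names and illustrative values: the CPU request is a multiple of 250m so Autopilot does not round it up, and the vCPU-to-memory ratio stays inside the 1:1 to 1:6.5 window:

```yaml
# Sketch of an Autopilot-friendly Pod spec (names and values are illustrative).
# Requesting 800m CPU would be rounded up to 1000m; 750m needs no adjustment.
apiVersion: v1
kind: Pod
metadata:
  name: web                       # hypothetical Pod name
spec:
  containers:
    - name: app
      image: gcr.io/my-project/app:latest   # hypothetical image
      resources:
        requests:
          cpu: 750m               # multiple of 250m, so no round-up
          memory: 2Gi             # 0.75 vCPU : 2Gi is within 1:1 to 1:6.5
```

On Autopilot, limits are set equal to these requests, so the values you request are also the ceiling the Pod can use.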
Auto scaling
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
Scale to 0 | Supported: Cloud Run automatically scales to the number of container instances needed to handle all incoming requests | Not supported | Supported |
Node auto-provisioning | NA | Self-managed | Managed |
Cluster autoscaling | NA | Self-managed | Managed |
Horizontal pod autoscaling (HPA) | NA | Self-managed | Self-managed |
Vertical Pod autoscaling (VPA) | NA | Self-managed | Self-managed |
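Since horizontal pod autoscaling is self-managed on both GKE Standard and Autopilot, you define the HPA object yourself. A minimal sketch, assuming a hypothetical Deployment named `web` already exists in the cluster:

```yaml
# Sketch of a self-managed HPA for GKE (Standard or Autopilot).
# Targets a hypothetical Deployment named "web".
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2                  # note: the HPA does not scale to 0 on its own
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # add replicas above ~70% average CPU use
```

On Autopilot, scaling Pods this way also drives node provisioning automatically; on Standard, you pair it with the cluster autoscaler you manage yourself.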
Networking
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
VPC networks | Managed | Self-managed: VPC-native traffic routing for public and private clusters | Managed: VPC-native traffic routing for public and private clusters |
Intranode visibility | NA | Self-managed | Managed |
Private networking | Self-managed | Self-managed | Self-managed |
Cloud NAT | Self-managed | Self-managed | Self-managed |
Authorized networks | Not supported | Self-managed | Self-managed |
Integrations & add-ons
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
Helm charts | Not supported | Supported | Not supported |
Support for Calico network policy | NA | Supported | Not supported |
Cloud Build integration | Supported | Supported | Not supported |
Logging | Managed | Managed | Managed |
Monitoring | Managed | Managed | Managed |
External monitoring tools | Supported | Supported | Not supported: most external monitoring tools require access that is restricted in Autopilot |
Configuring 3rd party storage platforms | Not supported | Supported | Not supported |
Configuring 3rd party network policies | Not supported | Supported | Not supported |
Security
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
Shielded nodes, Workload Identity, Secure boot | NA | Self-managed | Managed |
Customer-managed encryption keys (CMEK) | Self-managed | Self-managed | Self-managed |
Access Control | Self-managed: IAM | Self-managed: RBAC & IAM | Self-Managed: RBAC & IAM |
Application-layer secrets encryption | Self-managed | Self-managed | Self-managed |
Container threat detection | Not supported: Container Threat Detection supports only Container-Optimized OS node images. | Self-managed | Not supported |
Binary authorization | Self-managed | Not supported | Supported |
SLA | Service SLA | Control plane SLA | Control plane and pod SLA |
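For the "Self-managed: RBAC & IAM" rows above, GKE leaves Kubernetes-level authorization to you: IAM gates who can reach the cluster, and RBAC decides what they can do inside it. A sketch with a hypothetical user and namespace:

```yaml
# Sketch of self-managed RBAC on GKE: a read-only role for Pods
# bound to a hypothetical Google account. IAM still controls
# whether this user can connect to the cluster at all.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
  - kind: User
    name: dev@example.com         # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```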
Other features
Features | Cloud Run | GKE Standard | GKE Autopilot |
---|---|---|---|
Billing | Pay per use | Pay per node (CPU, memory, boot disk): users pay the assigned amount regardless of the node resources actually used | Pay per use (CPU, memory, and ephemeral storage): users pay only for the resources used |
Interoperability | Can be used with or without GKE Standard | Can be used with or without Cloud Run | You cannot convert Standard clusters to Autopilot clusters or vice versa |
What should you choose?
Go with Cloud Run when:
- You don’t want to use Kubernetes
- AND you want to run stateless containerized microservices
- AND app devs are going to manage the builds and deploys of your microservices (no ops)
- AND there are minimal third-party dependencies like ClickHouse or Grafana
Go with GKE Standard when:
- You need to run your services on Kubernetes
- AND you need advanced scalability and configuration flexibility to orchestrate your containers, such as
- Number of containers
- CPU and memory
- Networking
- Observability
- Security
- Highly compute-intensive workloads that require high-performance compute platforms
- Stateful applications, cronjobs etc.
- AND you have dedicated DevOps expertise to set up and manage your GKE cluster.
Go with GKE Autopilot when:
- You do not need the flexibility offered by GKE Standard
- AND if your workloads meet Autopilot constraints
- AND you have dedicated DevOps expertise to set up and manage your GKE cluster.
In general, use the highest level of abstraction when you're starting out. This enables you to use the most optimal technologies without having to learn them in detail. Over time, as you build your knowledge and as your use cases become more complex, you can start moving to less abstracted offerings.
Argonaut is an orchestration layer on top of your own cloud account (AWS/GCP). Its developer experience helps you get started with GKE (Standard) deployments in minutes, offering full flexibility while remaining simple to use. It's fully customizable and quickly integrates with a large ecosystem of third-party tools like Helm charts, Datadog, Redis, and Cloud SQL. Get started with Argonaut now.
Top comments (2)
This isn't necessarily a reason to pick Cloud Run over the others; they are all well equipped for this. I'd also say the operational burden of k8s is a bit overstated, but it's always important to look at the experience of the team and what requirements you will have on your system over the longer term.
That is correct. Having containerized apps is not a reason to pick Cloud Run, but it is a necessary prerequisite.
The k8s burden is definitely team-capability specific. A lot of the companies Argonaut deals with are startups. Something we increasingly see is that these teams do not want to deal with k8s or any infra-related engineering, so that they can focus on product validation and business value (rightly so? We might be biased here :-) ). As you can imagine, these are teams which do not have k8s expertise, and the entire space can be daunting without a gentle intro.
Many of these startups do benefit from having the right infra primitives from day 1, and this post hopes to guide them along the different options in that space.