NEW: The 2024 comparison is out here.
Update: An addendum exploring Spot VM pricing is here.
Last year I published a performance and price comparison of the compute offerings of several cloud providers. It started as a comparison of various Google GCP instance types, to see which were the best for running our (mainly Perl-powered) web backend at SpareRoom. Out of pure curiosity, I expanded the analysis to include 6 additional major cloud providers.
I am returning a year later with a big update. I will follow the same methodology, mainly a custom portable Perl benchmark (which seems a good indicator of generic CPU performance, especially in tasks similar to what a web service would run) and the "classic" Geekbench 5 - mostly so you have a commonly available frame of reference. Otherwise, these are the major changes for this year's comparison:
- More cloud providers: I tried to make it into a "Top 10 Cloud providers" list by adding Oracle Cloud, IBM Cloud and OVHCloud, which were frequently mentioned in such lists.
- New CPUs: This year's round-up includes Amazon's Graviton3 and Intel's Sapphire Rapids, making their debut.
- Strictly 2x vCPU instances: Last year, I discovered an issue with AMD EPYC instances on GCP that caused performance anomalies and required evaluating 2/4/8 x vCPU instances separately. With the issue resolved, I am simplifying the comparison.
- No burstable types: I have excluded burstable types to ensure a more apples-to-apples comparison. Since the benchmarks are the same, you can still refer to the previous comparison to look up the relative performance of burstable instance types.
- N. America and UK/EU regions only: Finally, I will only include types that are available in N. America or Europe regions. This is because China regions have very limited connectivity to the "outside" world, and other Asian and S. American regions may have different pricing, making price comparisons uneven.
As the article is quite extensive, here are some quick links to jump ahead if needed:
Single-thread Performance
Multi-thread Performance & Scalability
Performance / Price (On Demand)
Performance / Price (1-Year reserved)
Performance / Price (3-Year reserved)
Conclusions
And for last year's article with additional information on the methodology and inclusion of burstable VMs, click here.
The contenders (2023 edition)
I will try to cover all relevant non-burstable public VM instances for the 10 cloud providers below. I will focus on 2x vCPU instances, as that's the minimum scalable unit for a meaningful comparison (and generally the minimum size for several VM types), given that most AMD and Intel instances use Hyper-Threading. For those systems, a vCPU is a Hyper-Thread, or half a core, so a 2x vCPU instance gives you a full core with 2 threads. This will become clear in the scalability section.
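On the instance itself you can check how the vCPUs are actually presented - a quick sketch, assuming a Linux guest with `lscpu` available:

```shell
# Show CPU topology: on a Hyper-Threaded 2x vCPU instance you would
# typically see "Thread(s) per core: 2" with a single core, while
# full-core (e.g. ARM) or shared-pool vCPUs report 1 thread per core.
lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket)'
```
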
Other than that, I am targeting 2GB/vCPU of RAM and 30GB SSD (not high-IOPS) boot disk for the price comparison. Not all instances can be configured this way, and such exceptions are noted in the tables below.
The prices are on-demand (sometimes also listed as "pay as you go" or hourly), with sustained-use discounts for full monthly usage applied where available. US region pricing is preferred whenever available (it's usually the cheapest); otherwise a CA or UK/EU region is selected.
For providers that offer 1-year and 3-year committed/reserved discounts, the no-upfront price is listed for that option. The prices were valid for April 2023 - please check current prices before making final decisions.
As most of the CPUs are from Intel or AMD, here is an overview of the various generations from older (top) to newer (bottom), roughly grouped in performance tiers, based on last year's comparison results (and adding Sapphire Rapids):
Amazon Elastic Compute Cloud (EC2)
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month |
---|---|---|---|---|---|
c6g.large | Amazon Graviton2 | 2.5/4/30 | 52.04 | 33.64 | 23.86 |
c5a.large | AMD Rome | 3.3/4/30 | 58.61 | 38.17 | 26.49 |
c6i.large | Intel Ice Lake | 2.9/4/30 | 64.45 | 43.45 | 29.72 |
c6a.large | AMD Milan | 1.95/4/30 | 58.24 | 39.34 | 26.99 |
c7g.large | Amazon Graviton3 | 2.6/4/30 | 55.18 | 37.29 | 25.61 |
Amazon Web Services (AWS) needs no introduction; they pretty much started the whole "cloud provider" business - even though smaller connected VM providers predated it significantly (Linode comes to mind) - and still dominate the market. The AWS platform offers extensive services but, of course, we are only looking at their EC2 offerings for this comparison.
The big change from last year is the availability of instances (c7g) based on their new Amazon Graviton3 ARM CPU. Their Graviton2 instances were their most appealing offering from a price/performance POV last year, so it will be interesting to see how the next generation of the CPU works out.
With EC2 instances you generally know what you are getting (instance type corresponds to specific CPU), although there's a multitude of ways to pay/reserve/prepay/etc which makes pricing very complicated, and pricing further varies by region (I used the lowest cost US regions). In the 1Y/3Y reserved prices listed, there is no prepayment included - you can lower them further if you do prepay.
Google Compute Engine (GCE)
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month |
---|---|---|---|---|---|
n2d-c2-4096 | AMD Milan | 2.45/4/30 | 45.78 | 35.08 | 25.91 |
t2d-s2 | AMD Milan | 2.45/8/30 | 64.68 | 41.86 | 30.76 |
c2d-hcpu2 | AMD Milan | 3.05/4/30 | 57.72 | 37.47 | 27.62 |
t2a-s2 | Ampere Altra | 3.0/8/30 | 59.21 | ||
n2-c2-4096 | Intel Cascade L | 2.8/4/30 | 52.15 | 39.87 | 29.34 |
n2-c2-4096 | Intel Ice Lake | 2.6/4/30 | 52.15 | 39.87 | 29.34 |
e2-c2-4096 | Intel Broadwell | 2.2/4/30 | 43.38 | 28.44 | 21.17 |
n1-c2-4096 | Intel Skylake | 2.0/4/30 | 45.99 | 39.87 | 29.34 |
c2-s4 /2* | Intel Ice Lake | 2.7/8/30 | 63.99 | 51.01 | 33.49 |
c3-hcpu4 /2* | Intel Sapphire R | 2.7/4/30 | 65.91* | 45.40* | 31.35* |
* Extrapolated 2x vCPU price - type requires 4x vCPU minimum size.
The Google Cloud Platform (GCP) follows AWS quite closely, providing mostly equivalent services, but lags in market share (3rd place, after Azure). GCP has an interesting compute offering in that it provides a wide variety of instance types, but it is a bit convoluted at the same time: in the end you are not sure which type to choose. This confusion is what had me benchmark all their instance types to make optimal deployment decisions. At least configuring the instances themselves is a bit better than on Amazon (easier to configure and more customizable for most types), in my opinion.
Last year their AMD offerings, which were the best value, gave me significant woes, as my testing showed an obvious performance bug which meant instances with few cores were underperforming. This is fixed now and, in addition, c2d instances seem to have increased clock speeds to make the AMD offerings even more interesting. On the Intel side, the latest Sapphire Rapids-based c3 instances became available recently. Google even added ARM offerings (t2a), although, compared to the other types, they seem a bit pricey and with no committed-use discounts they don't seem price competitive.
GCP prices vary per region, change quite frequently and feature some strange patterns. For example, right now t2d instances, which give you a full AMD EPYC core per vCPU, and n2d instances, which give you a Hyper-Thread (i.e. HALF a core) per vCPU, have the same vCPU price if you commit to at least 1 year. Also, some instance types can come with a variety of CPUs, and by default you will get the oldest generation. However, e.g. for an n2d you can specify `min_cpu_platform="AMD Milan"` to get a 30% faster machine (EPYC 3rd gen) for the same price (it's available as a choice on the GCP console too). I will not even benchmark the older AMD Rome versions (EPYC 2nd gen) this time around - you should be avoiding them altogether. Similarly, an n2 will benefit from `min_cpu_platform="Intel Ice Lake"`.
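As a sketch of how the CPU platform can be requested when creating an instance from the CLI (the instance name and zone below are hypothetical placeholders):

```shell
# Hypothetical n2d custom instance (2 vCPU / 4GB) pinned to 3rd-gen
# EPYC; without --min-cpu-platform you may get the older generation.
gcloud compute instances create my-test-instance \
    --zone=us-central1-a \
    --machine-type=n2d-custom-2-4096 \
    --min-cpu-platform="AMD Milan"
```
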
Note that c2 and c3 types have a 4x vCPU minimum. This breaks the price comparison, so I am extrapolating a 2x vCPU price (half the cost of CPU/RAM plus the full cost of the 30GB SSD). GCP gives you the option to disable cores (you select "visible" cores), so while you have to pay for the 4x vCPU minimum, you can still run benchmarks on a 2x vCPU instance for a fair comparison.
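The extrapolation can be sketched like this (the dollar amounts below are made-up placeholders, not quoted GCP prices):

```shell
# Hypothetical monthly prices: a 4x vCPU instance including its 30GB
# SSD boot disk. Halve the CPU/RAM portion, keep the full disk cost.
P4=100.00   # 4x vCPU monthly price incl. SSD (hypothetical)
SSD=6.00    # 30GB SSD boot disk portion (hypothetical)
awk -v p4="$P4" -v ssd="$SSD" \
    'BEGIN { printf "extrapolated 2x vCPU price: $%.2f/month\n", (p4 - ssd) / 2 + ssd }'
```
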
Also, while on the graphs I list n2d-s2, n2-s2 etc., the standard configurations actually come with 8GB of RAM, so I had to specify custom configurations with 4GB.
Microsoft Azure
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month |
---|---|---|---|---|---|
D2pls_v5 | Ampere Altra | 3.0/4/32 | 51.36 | 31.65 | 21.26 |
F2s_v2 | Intel Cascade L | 2.6/4/32* | 64.16 | 38.90 | 25.04 |
D2ls_v5 | Intel Ice Lake | 2.8/4/32 | 63.6 | 38.98 | 25.99 |
D2as_v5 | AMD Milan | 2.45/8/32 | 64.32 | 39.40 | 26.26 |
* Come with an extra 8GB/vCPU Temp storage
Azure is the #2 overall cloud provider but, as expected, it's the best choice for most Microsoft/Windows-based solutions. That said, it does offer many types of Linux VMs, with capabilities quite similar to AWS/GCP.
The Azure pricing is at least as complex as AWS/GCP's, and the pricing tool seems worse. This year, Ampere Altra ARM instances were added. The AMD Milan type is the only one that cannot be configured with 2GB/vCPU, so it is priced with twice the RAM. Also, all three F2s_v2 machines I created came with Cascade Lake CPUs; however, the documentation specifies anything from Skylake to Ice Lake may be provided.
Unlike last year when everything seemed to go well (although with an arguably "clunkier" dashboard than rivals AWS/GCP), this year I had issues connecting or keeping ssh connections open to all the Linux VMs on my first day of testing (`connection to xxx closed`). As my connections to other providers were rock solid and the Azure status page was clear, it was quite unexpected and annoying; however, things seemed fine from the next day on, so it could have been a strange temporary glitch.
Alibaba Elastic Compute Service (ECS)
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1Y Res. $/Month |
---|---|---|---|---|
ecs.n4.large | Intel Skylake | 2.5/4/30 | 31.25 | 26.56 |
ecs.c6.large | Intel Cascade L | 2.5/4/30 | 41.81 | 35.54 |
ecs.hfc6.large | Intel Cascade L | 3.1/4/30 | 47.07 | 40.01 |
ecs.c7a.large | AMD Milan | 2.55/4/30 | 40.12 | 34.10 |
ecs.c7.large | Intel Ice Lake | 2.7/4/30 | 47.07 | 40.01 |
Alibaba Cloud is the largest cloud provider in China, and most of their services seem modelled after AWS. I looked at their EC2 equivalent, called ECS.
Compared to last year, Milan and Ice Lake instances became available in US/EU regions, although ARM is still only in Asia. China regions introduced instances powered by the in-house ARM-based YiTian 710 CPU, but you need to submit documentation/ID to be able to try them - and their connectivity to the rest of the world is limited, so I did not bother.
All prices listed above are for US (Virginia), except the c7a which is available in Frankfurt. The 1-year reserved instances have to be prepaid.
Oracle Compute VM
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month |
---|---|---|---|
Standard.A1 | Ampere Altra | 3.0/4/50 | 21.01 |
Standard3 | Intel Ice Lake | 2.6/4/50 | 36.35 |
Standard.E2 | AMD Naples | 2.0/4/50 | 25.19 |
I had never tried Oracle Cloud Infrastructure (OCI) before, and I was pleasantly surprised by their "Free Tier". The registration process is a bit draconian (I got shot down a couple of times), I assume to avoid abuse, but you get A1-type ARM VM credits equivalent to a sustained 4x vCPU, 24GB RAM for free, forever (plus a couple of micro AMD E2s). This is extremely generous compared to any other service, and it lowers your overall costs if you use ARM instances (you pay only the difference over the free credits). But even the paid ARM instances are priced insanely well: less than half the on-demand price of other providers' Ampere Altra solutions!
Another major reason to look at Oracle Cloud otherwise would be for the Databases and it's good to see that even the Free Tier gives you access to two Autonomous Databases.
I did have a couple of issues otherwise. I tried registering for a US region Free Tier account, but after getting rejected twice (after successful payment verification - you get rejected at the end with no explanation), I got accepted for the London region. Over a period of a month, I kept trying to provision the faster AMD Rome E3 instances, which were showing as available. I would go through the configuration and at the very end get an error about no capacity, with a suggestion to try a different zone (which I did every time, to no avail). The fastest AMD Milan E4 instances were not even theoretically available in the region, so I just tried the Naples E2 for the AMD side.
The second issue I had was with the web console. On Chrome, leaving a tab idle would log me out, and logging back in would take me to either the homepage or a random "apps" page - which is very annoying if you leave some monitoring open to return to. Safari was even worse: there seemed to be a memory leak which would use up all my RAM if I left a tab open overnight, even though the tab would automatically log out after a while - I've never seen anything like it before. However, as I am writing the report, a quick test suggests these issues, which plagued me for over a month of testing, may have been fixed.
Oracle Cloud's prices are the same across all regions, which is nice. They do not offer any reserved discounts - but we'll see how much that matters given their low on-demand prices.
Tencent Cloud Virtual Machine (CVM)
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1Y Res. $/Month |
---|---|---|---|---|
S5.MEDIUM4 (C) | Intel Cascade L | 2.5/4/65 | 45.60 | 29.52 |
C3.MEDIUM4 (S) | Intel Skylake | 3.2/4/65 | 59.70 | 44.64 |
SA2.MEDIUM4 (R) | AMD Rome | 2.6/4/65 | 38.40 | |
Tencent Cloud is among the top 3 cloud providers in China. Like Alibaba Cloud, they have more options in their Chinese regions, and last year I gave those a try. This time, I am only testing what is available in their US (Virginia) region - except SA2, which is in Frankfurt.
Apart from machine type availability, the prices vary per region as well. There is annual reserved instance pricing with or without prepayment (prices above are without prepayment), although it was not available for the SA2 instances. The prices listed above include the minimum bandwidth I could select on their pricing calculator (1GB/day). I could have deducted it myself, but at $0.07/GB it's one of the highest rates across all providers, so I just left in that extra $2.10 minimum which their own calculator adds. The disk pricing has quite large granularity, so you seem to pay the same for 65GB as you do for 30GB.
Now, there is a particularity I found with the Skylake-powered C3 compute-optimized instances which makes them feel different from what the other clouds call "compute-optimized". First of all, while they are the fastest of all providers for that specific CPU family, their performance varies significantly. In addition, only in the Virginia region, there is a 2-core version of C3.MEDIUM4 (4 cores is the minimum in other regions) which sometimes seems to give 2 full cores instead of threads - a bit similar to the shared instances of other providers. This behaviour is not consistent, so I used results from an instance that showed the usual 2 vCPUs = 2 Hyper-Threads behaviour.
DigitalOcean Droplets
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month |
---|---|---|---|
Basic-2 | Intel | 2.3/4/80 | 24.00 |
Premium-2 | Intel Cascade L | 2.5/4/80 | 28.00 |
Premium-2-AMD | AMD Rome | 2.0/4/80 | 28.00 |
CPU-opt-2 (S) | Intel Skylake | 2.7/4/25 | 42.00 |
CPU-opt-2 (C) | Intel Cascade L | 2.7/4/25 | 42.00 |
CPU-opt-2 (I) | Intel Ice Lake | 2.6/4/25 | 42.00 |
DigitalOcean was close to the top of the perf/value charts last year, with their shared CPU Basic "droplets" providing the best value. I am actually using DigitalOcean droplets to help out by hosting a free weather service called 7Timer, so feel free to use my affiliate link to sign up and get $200 free - you will help with the free project's hosting costs if you end up using the service beyond the free period. Apart from value, I chose them for the simplicity of setup, deployment, snapshots, backups.
I do love how simple, region-independent and stable their pricing structure is (especially after going through some really complicated calculators on other providers), although they did increase their prices by 20%, after years of them being unchanged. Another drawback compared to last year is that on this round of testing, their best value shared-CPU VMs were giving me lower performance on average, which means they might have a higher usage rate for those VM types currently. We'll see how that translates in the benchmarks, but I am afraid DigitalOcean might not have been upgrading their "Basic" instance fleet enough compared to their growth.
Akamai (Linode)
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month |
---|---|---|---|
Linode 4GB (N) | AMD Naples | 2.2/4/80 | 24.00 |
Linode 4GB (R) | AMD Rome | 2.9/4/80 | 24.00 |
Linode 4GB (M) | AMD Milan | 2.0/4/80 | 24.00 |
Dedicated 4GB (N) | AMD Naples | 2.0/4/80 | 36.00 |
Dedicated 4GB (R) | AMD Rome | 2.9/4/80 | 36.00 |
Dedicated 4GB (M) | AMD Milan | 2.0/4/80 | 36.00 |
Linode, the 20-year-old cloud provider whose (not easy to provision) AMD Rome instances topped the value charts in the last comparison, was bought by Akamai, well known for their CDN.
It's still early in Linode's integration into Akamai, but I have already seen significant changes: a 20% price increase sort of followed DigitalOcean's lead; however, they finally introduced AMD Milan instances to their fleet. You still can't choose a CPU, but while last year 9 out of 10 provisioned VMs would be slow Naples, in many regions this was no longer the case during this year's testing. While there still seems to be quite a stock of Naples in the Atlanta region (and perhaps Newark), the majority of the VMs I built had AMD Milan or Rome CPUs. The catch is that their Milan is clocked much lower than their Rome (so the latter is still preferred - see the benchmarks), but at least you rarely get stuck with an old, slow Naples (again, depending on the region).
Otherwise, Akamai/Linode is still easy to set up and maintain and has a simple backup option - I like it about as much as DigitalOcean in that respect. And, like DigitalOcean, its simple pricing structure is region-independent.
I'll try to refer to the company as Akamai from now on and keep "Linode" just for the shared-CPU VM type.
IBM Cloud Compute
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1Y Res. $/Month | 3Y Res. $/Month |
---|---|---|---|---|---|
cx2-2x4 | Intel Skylake | 2.6/4/100 | 66.89 | ||
B1.2x4 | Intel Broadwell | 2.1/4/25 | 58.52 | 38.14 | 23.25 |
cz2-2x4 | IBM z15 | 4.5/4/100 | 142.26 |
The IBM Cloud offers some compute options that the major cloud providers might not (e.g. bare metal servers), as well as other unique services (e.g. managed Db2); however, I am only testing their Virtual Server options (both the "VS for VPC" and "VS for Classic" variants). There were only 3 major types of machines available that I could see, although there are many more options available as bare metal servers.
I found their console and price estimator more convoluted than necessary. It's also disappointing that only their "Classic" cloud offers reserved discounts. However, big businesses with software written for System/360 mainframes (perhaps even from all the way back in the 60s) will be pleased to learn that, apart from x86, there are z/Architecture virtual servers. Although compiled benchmarks like Geekbench can't run on them, I did try the Perl benchmark to see how it performs on the 4.5GHz IBM z15 cores.
Finally, I got to use both their online chat and email support - they were both fast to respond and effective.
OVHcloud
Instance Type | CPU type | CPU GHz/ RAM/SSD | Price $/Month | 1M Res. $/Month |
---|---|---|---|---|
D2-4 | Intel Haswell | 2.0/4/50 | 17.3 | 13.15 |
S1-8 | Intel Haswell | 2.1/8/40 | 18.87 | 14.66 |
B2-7 | Intel Haswell | 2.4/7/50 | 34.96 | 29.04 |
C2-7 | Intel Haswell | 2.7/7/50 | 50.56 | 42.24 |
The French OVHcloud seems to have quite a fan base, and I was surprised to see how low the cost of their S1/D2 Public Cloud offerings is. Sure, they are just shared Haswell CPUs, but at these prices the performance/price ratio may be quite good. Their prices seem to go down some more if you select EUR or CAD, which I found a bit odd, but I kept the currency set to USD to match every other provider. Do check it out if you can pay in a different currency without significant fees. There is also discounted reserved pricing for a 1-month reservation.
After my tests, I actually left a monthly reservation active, which I deleted on the first day of the subsequent month, just after I got the invoice. I opened a support ticket, politely asking if they could refund (fully or partially) that month's reservation, since I had deleted it just a few hours into the cycle. From my experience, Amazon and Google would most likely have refunded in such a situation. OVHcloud took 14 days to respond to my ticket, apologizing for the delay and declining, saying it is a non-refundable service (yeah, I know, I was just asking for a courtesy refund). I would definitely not call that good customer service. Amusingly, I received an email to rate their customer service response; I tried to click on a low grade and got a broken page stating "Ce questionnaire n'est pas accessible actuellement" ("This questionnaire is currently not accessible").
...
Test setup
Since I followed the same methodology as last year, I will go over it very briefly, but you can still read more details here.
Almost all the tested instances were on 64bit Debian 11 (for a couple I had to settle for Ubuntu 20.04 LTS), with the following software:
- DKBench Perl benchmark
> wget https://github.com/dkechag/perl_benchmark/archive/refs/heads/master.zip
> unzip master.zip
> cd perl_benchmark-master
> ./setup.pl
> ./dkbench.pl
My own benchmark suite - DKBench - was run over a dozen times on each instance over various hours/days to get fastest/slowest run-times.
The least "real-world" part of this suite, the prime benchmark, is the most CPU-scalable, so it was used to calculate the maximum scalability over the 2 vCPUs by getting the single- and dual-thread benchmark times:
> ./prime_threads.pl
> ./prime_threads.pl -t 1
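From those two runs, the maximum scalability figure used later can be derived as the ratio of the single-thread time to the per-thread time of the dual run - a sketch with made-up timings:

```shell
# Hypothetical times: 20s for the single-thread run (-t 1), 25s per
# thread when 2 threads run concurrently. 20/25 = 80% scalability.
T1=20   # seconds, ./prime_threads.pl -t 1
T2=25   # seconds per thread, ./prime_threads.pl
awk -v t1="$T1" -v t2="$T2" 'BEGIN { printf "max scalability: %.0f%%\n", 100 * t1 / t2 }'
```
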
- Geekbench 5
> wget https://cdn.geekbench.com/Geekbench-5.4.4-Linux.tar.gz
> tar xvfz Geekbench-5.4.4-Linux.tar.gz
> Geekbench-5.4.4-Linux/geekbench5
I simply kept the best of 2 runs; you can browse the results here. There's an ARM version too at https://cdn.geekbench.com/Geekbench-5.4.0-LinuxARMPreview.tar.gz.
- Perl 5.32.1 compilation (perlbrew)
> \curl -L https://install.perlbrew.pl | bash
> source ~/perl5/perlbrew/etc/bashrc
> perlbrew download perl-5.32.1
> time perlbrew install perl-5.32.1
Compiles Perl from sources and runs the default tests (single-threaded).
Results
The raw results can be accessed on this spreadsheet (or here for the full Geekbench results).
In the graphs that follow, the y-axis lists the names of the instances, with the CPU type in parenthesis:
(SR) = Intel Sapphire Rapids
(I) = Intel Ice Lake/Cooper Lake
(C) = Intel Cascade Lake
(S) = Intel Skylake
(B) = Intel Broadwell
(H) = Intel Haswell
(M) = AMD Milan
(R) = AMD Rome
(N) = AMD Naples
(G3) = Amazon Graviton3
(G2) = Amazon Graviton2
(A) = Ampere Altra
(Z) = IBM z
(none) = Unspecified Intel (Broadwell/Skylake)
Single-thread Performance
Single-thread performance can be crucial for many workloads. If you have highly parallelizable tasks you can add more vCPUs to your deployment, but there are many common types of tasks where that is not always a solution. For example, we can scale our Sphinx search database to enough vCPUs so that each request is served without waiting in a queue; however, the vCPU's speed determines the actual response time of the request. Many databases and web apps work the same way - even if you have enough cores to serve all concurrent web or db requests, the actual response time is still limited by the maximum single-thread performance.
- Perl performance
We start with the main Perl benchmark on a single thread:
Google tops the performance table, with the new Intel Sapphire Rapids c3 instances, followed very closely by their "compute-optimized" Milan c2d variant (they seem to have sorted out the performance issues that I noted last year). This tells me that when Google releases their next gen AMD EPYC Genoa instances, they should be comfortably faster in this benchmark. I base my estimate on published Milan vs Genoa benchmarks - we'll just have to wait until some time later this year to verify.
Following those two in performance, we have a string of Intel Ice Lake and more AMD Milan instances from other providers, ending with Amazon's Graviton3, which manages to be within 20% of the top. After seeing how Graviton2 was a great value last year, I was hoping Graviton3 would get top tier performance, and it does not disappoint.
The second tier of performance (within 50% of the top) includes mostly Intel Cascade Lake and AMD Rome VMs, with an impressive Skylake (C3) from Tencent thrown in, and closes with Google's and Oracle's Ampere Altra VMs.
At the absolute bottom you see OVH's Haswell-powered D2 and S1, as well as Akamai's AMD Naples variants (fortunately now easier to avoid). Oracle's Naples E2 and IBM's Broadwell B1 complete the group taking over 200 seconds on the benchmark, i.e. delivering best-case performance that is less than half that of the top performers.
- Perlbrew compilation
Compiling perl is not a 100% CPU-bound task - it is affected by I/O performance as well - so some CPU differences are less pronounced, but overall we don't get a vastly different picture:
On top we find Google's c2d (Milan) and Microsoft's Dls_v5 (Ice Lake) followed by other machines with those CPUs - plus the Sapphire Rapids which is within 1-2% of the top. Oh, and that Tencent Skylake compute-optimized C3 is still close to the top.
The only type that manages to take more than twice the time of the top performer is OVH's hopelessly slow D2, although the Naples types from Akamai and Oracle are not much faster.
- Geekbench 5 Single Core
Even though Geekbench is a synthetic benchmark that is farther from the workloads that interest me and I mainly use it for quick comparisons, it can be quite useful because results are available online for a multitude of systems. It's good to see that it generally agrees with my other performance tests: the top performance group includes Milan and Ice Lake VMs from various providers, with Google's c2d once more at #1 and Sapphire Rapids at #4. Amazon's Graviton3-powered c7g follows this group.
And, again, Haswell and Naples systems are at the bottom, with OVH's inexpensive D2 scoring under 500 - about a third of the fastest system's score.
Multi-thread Performance & Scalability
- Scalability
A graph follows with the calculated maximum CPU scalability over the 2 vCPUs (shown in magenta). The Geekbench scalability is also plotted, which is a more average scalability figure and is usually lower.
100% scalability means that if you run 2 parallel threads, they will both run at 100% speed compared to how they would run in isolation. For systems where each vCPU is 1 core (e.g. all ARM systems), or for "shared" CPU systems where each vCPU is a thread among a shared pool, you should expect scalability near 100% - what is running on one vCPU should not affect the other when it comes to CPU-only workloads.
Most Intel/AMD systems though give you a single core with 2x Hyper-Threads as a 2x vCPU unit. Those will give you scalability well below 100%. A 50% scalability would mean you have the equivalent of 1x vCPU, so the farther up you are from 50%, the more performance your 2x vCPUs give you over running on a single vCPU.
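Put differently, the scalability figure translates directly to "effective vCPUs" - a quick sketch of the conversion:

```shell
# 2 vCPUs at a given scalability: 100% -> 2.0 effective vCPUs,
# 50% -> 1.0 (the second Hyper-Thread adds nothing).
for s in 100 65 50; do
    awk -v s="$s" 'BEGIN { printf "%d%% scalability = %.1f effective vCPUs\n", s, 2 * s / 100 }'
done
```
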
As expected, the ARM instances (Graviton and Altra), as well as Google's Milan t2d and the IBM z15, are near the 100% mark, as they give you a full core per vCPU. The shared-CPU instances from Akamai, DigitalOcean, OVH and Alibaba are also well over 90% - the performance of each of your threads depends more on what the rest of the shared cluster (i.e. other customers) is running at the time than on your own 2 threads.
The AMD non-shared instances are mostly around the 65%-70% mark, which is a bit like getting 1.35 vCPUs for each 2x vCPU unit you buy. Intel fares quite a bit worse, staying at around 60-65% (i.e. about 1.25 vCPUs per 2x unit). This includes even their latest Sapphire Rapids, which is a bit disappointing given that Intel invented Hyper-Threading. Note that this does not seem to be a fluke of the benchmark used, as the Geekbench scalability, while consistently lower, shows the same trend.
There are some instances between the two groups above, with scalability in the 70%-85% range, like Akamai's Dedicated, Tencent's E2/SA2, Oracle's E2 and even Amazon's c6a. I have to admit I am not sure what is going on with those; in theory they should not be shared CPUs, so they should behave like the second group. They are not particularly interesting compared to other types, so I didn't spend time trying to figure this one out.
- Multi-thread Performance
Applying the maximum scalability to the DKBench performance we can get an (optimistic) extrapolation of the multi-threaded performance of our instances (sorted by the mean of fastest/slowest DKBench run):
The t2d from Google is the obvious winner, as it provides you with pretty much the fastest core (AMD Milan) at a configuration of a full core per vCPU. However, Amazon's Graviton3 is surprisingly close.
The z15 and Altra instances are quite a step below those two and they just barely manage to place above the fastest of the rest of the Milan and the single Sapphire Rapids.
At the bottom we have the usual suspects: Akamai's Naples, OVH's S1/D2 Haswell, and the Broadwell types from IBM and Google.
We can see a roughly similar picture on Geekbench 5:
The only surprise is how well OVH's C2 Haswell does here; it must be operating at a significantly higher power envelope than the rest of the field (it's an older, less efficient generation). In general though, the same comments as above apply.
Performance / Price (On Demand / Pay As You Go)
Let's move on to what I consider the more important graphs showing the performance / price ratio. If you wanted the absolute fastest performance at any cost, it is quite likely you would not be using a public cloud offering in the first place - so cost should be a significant concern for most public cloud decisions.
I will start with the "on-demand" price quoted by every provider (including any sustained use discounts). I listed monthly costs above, but these prices are charged per minute or hour, so you don't need to reserve for the month. Note that some providers also have spot/preemptible instances at deep discounts - I looked at those in a later post here.
- Perl single-thread performance/price
The first chart is for single threaded performance/price. Even if you want to exclusively run a single thread, for most of these instance types you still have to buy a 2x vCPU unit anyway.
Oracle's Ampere A1 easily dominates overall (and is actually one of the few that comes in 1x vCPU configurations as well), while their Ice Lake is the best value among the top performing CPUs. Akamai is the other highlight with their shared-CPU Linode 4G Rome & Milan.
Behind the top value group lie the low-cost types from DigitalOcean and OVH: they are slow - with highly variable performance - but also cheap. Among the 3 major cloud providers, Google's n2d manages the best value.
IBM comes dead last. Their z/arch type is excused - it's for very special use cases - but their x86 ones are not; they are just poor value.
- Multi-thread performance/price
Looking at the multi-thread performance/price we get very similar results from DKBench and Geekbench 5:
Due to the Altra's scalability, the Oracle A1's value skyrockets even further; it's simply an amazing value among on-demand VM instances.
As before, low cost shared-CPU types follow, first from Akamai, further back from DigitalOcean and OVH.
Among the major providers, this time it is AWS leading with its Graviton3 instances, with Google's t2d further back (though configured with twice the RAM).
IBM is again last, but also anything Intel (or even AMD that is not Milan) from the "big 3" providers offers generally poor value.
Performance / Price (1-Year reserved)
Several providers offer significant 1-year reserved discounts: AWS, GCP (except the ARM t2a), Azure, Alibaba, Tencent (except SA2) and IBM (B1 only). OVH offers 1-month reserved discounts. For long-term instances, these could change the value proposition of the various solutions significantly. Also, for GCP and AWS you could actually apply the 1-year prices to most on-demand instances by using (free) third-party services like DoIT's Flexsave, so it's quite important to investigate the performance/price ratios including these discounts:
- Perl single-thread performance/price
Oracle's A1 still leads, with OVH catching up to Akamai. However, with the 1-year discount, GCP's and Alibaba's Milan types are fast enough to offer similar performance per price, despite their significantly higher cost.
- Multi-thread performance/price
The slowest OVH instances catch up with Akamai once more; however, they are all still far from the value that the much faster Oracle A1 provides.
The Amazon Graviton3 c7g VMs also shine here, but for me the other highlight (apart from Oracle) is GCP's t2d. Remember, this type offers the highest multi-thread performance (you get 1 full Milan core per vCPU) and it is configured here with twice the RAM of almost everything else, yet it still makes the top 10 in performance/cost! What's crazy is that its reserved price is currently exactly the same as the n2d's (1 Milan Hyper-Thread per vCPU) - if you configure the latter with the same RAM. The same price, while getting twice the actual CPU hardware.
Remember, if you use Flexsave like we do at SpareRoom, those great t2d and c7g prices could apply to on-demand instances as well.
Performance / Price (3-Year reserved)
Finally, for very long-term commitments, AWS, GCP, Azure (and IBM, for their B1 only) offer 3-year reserved discounts. We plot these against the best prices of competitors (1-year, 1-month or on-demand, whichever is lowest):
- Perl single-thread performance/price
Azure's Ice Lake, along with their own and Google's Milan, tops the value chart. At the 3-year discount their price is low enough that their top-tier single-thread performance pushes them ahead of the (still cheaper) Oracle A1 and Amazon Graviton3.
- Multi-thread performance/price
Impressively, the Oracle A1 is still ahead. The Amazon Graviton3 type and Azure's own Altra are only now catching up with the 3-year commitment, and the t2d is right behind them. It's worth repeating that Oracle gives you your first 4 vCPUs and 24GB for free, so there is an extra discount I cannot show here (it might even be a 100% discount if you only need up to 4 vCPUs!), while the t2d is the fastest overall and is priced with twice the RAM.
Conclusions
All the data is provided so you can draw your own conclusions, but I'll share mine - some are obvious, some are more subjective. I'll start with the 2 things that impressed me the most:
- Oracle's ARM A1 offers the best (often by far) "bang for buck" in almost all scenarios. And there's even a discount on top that I can't include in the comparison (first 4x vCPU & 24GB free). I will not comment further on how surreal it is that Oracle has the best-priced solution.
- The only scenario where the A1 does not lead (apart from specifically requiring x86) is when you need maximum per-thread performance. That's Google's t2d's domain. It's not obvious from the chart, as it can only be configured with 4GB/vCPU (vs 2GB for most other types), but for reserved instances it is the same price as Google's own "regular" Milan instances while providing twice the actual hardware (you get cores instead of Hyper-Threads).
I'll further comment with my picks for various usage scenarios:
- Best overall value (performance/price): That's the Oracle A1 as mentioned above. You don't even have to reserve to get the best value.
- Best overall value x86: If you can't do ARM, Akamai/Linode is your best bet. Just make sure when you spin up an instance you don't get a "dud" (i.e. Naples).
- Best value for top-tier performance: If you can reserve for at least 1 year, that would be Google's t2d. For on-demand pricing, Oracle's Standard3 is probably your best bet, unless the slightly lower performance of Amazon's c7g is enough.
- Budget solution: The OVH D2 and S1 are the cheapest per-vCPU solutions and, especially with 1-month reservations, they would seem to provide good value. However, they are so slow that you can actually go cheaper and faster at the same time with another provider: an Oracle A1 with 1x vCPU and 4GB RAM is just $12.72/month, which is less than even the monthly reservations from OVH. Such an instance will give you about TWICE the single-threaded performance and at least as good multi-threaded performance. Plus, you can get 4 of those instances for free - what's more budget-friendly than that?
- Maximum performance: Assuming you don't care about price, Google's c2d seems to marginally beat their own c3 in 2 of my 3 single-threaded tests, so the maximum-performance (on a full core) crown is between those two. However, to keep this speed for multi-threaded workloads, you'd need twice the cores compared to the slightly slower-clocked but not Hyper-Threaded t2d.
Finally, I'll make some comments per provider tested in the order I introduced them:
- Amazon: Love their Graviton3 - it's a faster ARM alternative to the commonly used Ampere Altra, and it's even cheaper than the latter is on all other providers apart from Oracle. Otherwise AWS is quite costly, so you'd better reserve if you have long-running instances, or sign up for Flexsave or similar.
- Google: After sorting out last year's issues, their Milan instances are either the fastest or the best value among the large cloud providers, depending on the type. The reserved prices for the Tau variants especially make them an amazing value for those who need significant processing power. Reserving is not the only way to get the lower prices, as I mentioned with services like Flexsave. Be careful: for certain instance types you have to specifically choose the min_cpu_platform, otherwise e.g. n2d can give you a slower Rome and n2 can give you a slower Cascade Lake. Also, I'd avoid n1 instances - with sustained use or reservations, other instance types either become cheaper (e2) or reach a similar price with significantly increased speed.
- Microsoft: For Linux hosting they don't seem to offer better value than even the pricey Amazon (who at least have a good value proposition with their Graviton3), so price would not be a reason to choose Azure. Most of their solutions are obviously geared towards Windows anyway (with temp storage on the hypervisor for Windows's pagefile.sys etc.), so it is good that you at least have some decent options to run Linux instances, presumably alongside the other Windows solutions that were the reason to choose Azure in the first place.
- Alibaba: Their main cloud offerings and lowest prices are in China, however their c7a and n4 are a decent value for some scenarios when compared to the above 3 providers. That said, not sure why I would not use better value solutions like Oracle or Akamai.
- Oracle: Hard to believe that Oracle provides the cheapest Altra, but they do, and it's an unbeatable value. It is quite annoying that I could not change region or try their modern AMD instances, E3 and E4. With the E3 especially, it was ridiculous to see it offered as a configuration option only to be denied at the very end when trying to provision one. Every time. Dozens of attempts over the course of more than a month. But they are currently the ultimate budget provider, so I feel I'd have to forgive that.
- Tencent: Like Alibaba, their best solutions are in China. They did add 1 year reservations compared to my last tests with them, but those are not enough to compete in the west for value with other providers. Their console seems a bit better than Alibaba's at least.
- DigitalOcean: I still love their easy-to-use web console and simple pricing, but they dropped in performance compared to last year (with increased prices too), and with Akamai/Linode improving they now seem to lag behind. Not far behind, but their VM fleet needs some upgrades to keep up. As I noted last year, their CPU-optimized instances are not good value; if that matters to you, stay with their Basic or Premium shared-CPU types.
- Akamai: I am not sure whether the Milan upgrades came before or after the purchase by Akamai, but they solved (for most regions) the problem of getting stuck with slow Naples VMs that made me reluctant to propose them as the best solution last year. The interface has changed a bit, but is still just as easy to use. I would still much prefer being able to choose the CPU though, even if it meant paying something extra, and it's time they got rid of Naples completely. As I noted last year, the Dedicated types are not nearly as good value and their performance is not as stable as you'd expect for "dedicated" CPUs.
- IBM: By far the lowest performance/price, the IBM cloud VMs seem to me to target two types of scenarios: old s390x software that can only run on the z15 instances, or "nobody ever got fired for buying IBM" company culture. I am only half-joking, as I can see that the IBM cloud does have some other unique offerings, but they could surely offer some competitive pricing for their x86 VMs. It was interesting, though, to try my own benchmark on the z15 - a 4.5GHz implementation of the old s390x mainframe architecture - it was close in performance to the 3GHz ARM Ampere Altra.
- OVHcloud: I've heard many people recommend OVH, which is probably why I expected more. From what I can tell, unless I missed something vital, all of their Public Cloud VM types are based on the 10-year-old Haswell architecture. They have some very cheap options, but those are so slow that you could effectively get half the vCPUs from the likes of Oracle at a lower overall price and still come out ahead in performance. Then there is the high-clocked C2, which can actually compete with much more modern Cascade Lakes - an impressive feat. However, those do not come cheap at all, probably because a Haswell is so inefficient compared to the newer CPUs it competes with that the running costs must be significantly higher for OVH. Lastly, the 14-day support ticket response time I experienced would probably be enough to turn me off by itself.
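As a concrete example of the GCP caveat mentioned in the Google notes above, the minimum CPU platform can be pinned at instance creation time. A sketch with gcloud - the instance name, zone and machine type are placeholders you'd adjust for your project:

```shell
# Pin the CPU generation when creating an n2d instance; without this,
# GCP may schedule you on an older (slower) Rome host.
# Name, zone and machine type below are placeholder values.
gcloud compute instances create my-instance \
    --zone=us-central1-a \
    --machine-type=n2d-standard-2 \
    --min-cpu-platform="AMD Milan"
```

The same flag works for n2 types (e.g. with "Intel Ice Lake") to avoid landing on a slower Cascade Lake host.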
Hopefully the comparison gives you a good idea of how much processing power you can get for your money with the various options on each cloud provider. If your workloads are very specialized, you will probably want to run them as benchmarks yourself instead of relying on my methodology, which suits general computing, web services etc. Also, don't forget that you will have to factor in network costs, changing prices, and special requirements around region, RAM, storage and anything else that varies between providers or is not available from some.
Top comments (9)
Thanks for compiling the huge list.
Would be great if you could include Hetzner Cloud (hetzner.com/cloud) next time.
I will consider it for next year. Hetzner is a bit lower tier (in reputation and otherwise) and there are some questions about oversubscribing, no support etc, but it would be interesting to test them out and see what you actually get in comparison.
Hetzner seems to be even cheaper than Oracle (ARM), but I suspect quality is so-so?...
Damn bro. Huge effort. ❤️
As others have said, thank you so much for taking the time to perform the tests and do the writeup! 🤩
This is excellent! Many thanks for the article. Currently I stick with Oracle (which really is the best buy), but just for safety I need to check the next 2-3 guys, because I heard that Oracle sometimes bans your PAYG account
All the complaints about Oracle bans that I've seen involved things like VPNs etc which are against their terms. So if you are not doing anything dodgy/abusive, you should be fine.
Sure, I'm doing absolutely legal AI video generation (even with content moderation) so all should be fine
Great summary, thank you so much! To be honest, Oracle cloud wasn't even on my radar, but that's about to change.
I am a bit less clear on how running apps on kubernetes applies to the reserved instance types. Perhaps that's a topic for another discussion....