The final goal of the previous posts was the following rainbow:
I got it from the "Cost Analysis" for my "yugabytedb" compartment:
The cost per hour, at the time when I had 9 pods running the YugabyteDB database, taking 1000 inserts per second plus 900 reading threads, was CHF 2.81. This is about 3 US dollars, about USD 85 per month.
Half of it is for compute (my 3 workers with 16 vCPUs, 128 GB RAM, and 8 Gbps network each). The other half is for storage (I provisioned 1 TB per pod with 25,000 IOPS and 480 MB/s throughput).
You can compare with other cloud vendors: the Oracle Cloud is not expensive. And there's even better: I have a database distributed across multiple Availability Domains (the equivalent of AZs), but no network costs are visible here, because this inter-AD traffic is free in the Oracle Cloud. This is great, especially for a distributed database.
Many people do not trust Oracle, given its past commercial practices, and nobody knows whether this pricing will stay or is only there to grab a share of the cloud market. But with YugabyteDB, no worries: you can add nodes in another cloud provider and then migrate without application downtime.
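To give an idea of how such a migration goes, here is a minimal sketch, not the full procedure (which also involves the yb-master nodes), and with hypothetical cloud, region, zone and address values: you start new yb-tserver processes in the other cloud tagged with their placement information, then change the cluster's placement so replicas are rebalanced there online.

# start a new yb-tserver in the target cloud, tagged with its placement (hypothetical values)
yb-tserver --tserver_master_addrs "$MASTER_ADDRESSES" \
  --placement_cloud aws --placement_region eu-west-1 --placement_zone eu-west-1a \
  --fs_data_dirs /mnt/data &
# then ask the masters to place the replicas in the new cloud; tablets move online
yb-admin --master_addresses "$MASTER_ADDRESSES" modify_placement_info \
  aws.eu-west-1.eu-west-1a,aws.eu-west-1.eu-west-1b,aws.eu-west-1.eu-west-1c 3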
Finally, I scaled from 3 to 18 workers, for CHF 360 per day:
It is important to understand what this costs. Here is the detail for one hour of it, so CHF 15:
| lineItem/intervalUsageStart | lineItem/intervalUsageEnd | product/service | product/Description | cost/productSku | cost/currencyCode | cost/skuUnitDescription | SUM of usage/billedQuantity | SUM of cost/myCost |
|---|---|---|---|---|---|---|---|---|
| 2022-04-10T16:00Z | 2022-04-10T17:00Z | BLOCK_STORAGE | Block Volume - Performance Units | B91961 | CHF | GB Months | 1844.301075 | 3.135311828 |
| 2022-04-10T16:00Z | 2022-04-10T17:00Z | BLOCK_STORAGE | Block Volume - Storage | B91962 | CHF | GB Months | 184.4301075 | 4.739853763 |
| 2022-04-10T16:00Z | 2022-04-10T17:00Z | COMPUTE | Standard - E3 | B92306 | CHF | OCPU Hours | 144 | 3.6288 |
| 2022-04-10T16:00Z | 2022-04-10T17:00Z | COMPUTE | Standard - E3 - Memory | B92307 | CHF | GB Hours | 2304 | 3.456 |
| 2022-04-10T16:00Z | 2022-04-10T17:00Z | NETWORK | Outbound Data Transfer Zone 1 | B88327 | CHF | GB Months | 0.01172930188 | 0 |
| 2022-04-10T16:00Z | 2022-04-10T17:00Z | ORALB | Oracle Bare Metal Cloud - 100 Mbps Load Balancer | B88319 | CHF | LB Hours | 2 | 0.03862 |
| Grand Total | | | | | | | | 14.99858559 |
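This kind of summary can also be reproduced from the raw cost report CSVs that OCI publishes (the column names are the ones visible above). Here is a rough sketch, assuming a downloaded report file named report.csv.gz (a hypothetical name) and no quoted commas inside the fields:

# naive aggregation of an OCI cost report: sum cost/myCost per product/Description
zcat report.csv.gz | awk -F, '
NR==1 { for (i=1; i<=NF; i++) col[$i]=i ; next }                     # map header names to column positions
      { sum[$col["product/Description"]] += $col["cost/myCost"] }   # accumulate cost per product
END   { for (p in sum) printf "%-50s %12.6f\n", p, sum[p] }'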
CPU is 144 units (18 workers x 8 OCPUs): CHF 3.6288 per hour for 288 vCPUs. RAM is 2304 units (18 workers x 128 GB). Storage is over-provisioned here (I have 134 volumes). And I have two load balancers.
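From the two compute lines, the unit prices are easy to back out (a quick check with bc, using only the numbers in the table above):

echo "scale=4; 3.6288 / 144"  | bc   # => .0252 CHF per OCPU hour (E3 Standard)
echo "scale=4; 3.456  / 2304" | bc   # => .0015 CHF per GB of RAM per hour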
And the outbound data transfer is free. Well, the first 10 TB per month are free. Let's transfer some data between the instances:
# from each node, pull a stream of /dev/urandom from every other node over ssh and discard it, to generate cross-node traffic
l=$(kubectl get nodes -o wide | awk '/[.]/{print $7}') ; for i in $l ; do for j in $l ; do ssh -o StrictHostKeyChecking=no $i "ssh -o StrictHostKeyChecking=no $j 'cat /dev/urandom' > /dev/null" & done ; done
And I checked the amount of data transferred from each VM (two thirds of these transfers are cross-AD):
[opc@C oke]$ for i in $(kubectl get nodes -o wide | awk '/[.]/{print $7}')
do
ssh $i /usr/sbin/ifconfig ens3
done | grep "TX packets"
TX packets 8478440884 bytes 7955254757560 (7.2 TiB)
TX packets 6526564712 bytes 4309041230581 (3.9 TiB)
TX packets 7209189715 bytes 4520487334686 (4.1 TiB)
TX packets 4893032052 bytes 3874819821118 (3.5 TiB)
TX packets 4840134414 bytes 3610673202510 (3.2 TiB)
TX packets 5590458325 bytes 3916130768422 (3.5 TiB)
TX packets 5028607494 bytes 3862491163175 (3.5 TiB)
TX packets 8620319573 bytes 7966947093007 (7.2 TiB)
TX packets 2543507911 bytes 2608511625449 (2.3 TiB)
TX packets 6502391358 bytes 4292787556529 (3.9 TiB)
TX packets 5623769479 bytes 3912518736119 (3.5 TiB)
TX packets 5497755851 bytes 3830829287170 (3.4 TiB)
TX packets 7394518139 bytes 4703877357774 (4.2 TiB)
TX packets 3288643325 bytes 2944649394662 (2.6 TiB)
TX packets 5861841546 bytes 3972967480991 (3.6 TiB)
TX packets 8581590561 bytes 8730487409398 (7.9 TiB)
TX packets 5544891680 bytes 3778050574980 (3.4 TiB)
TX packets 3900377548 bytes 3439890071890 (3.1 TiB)
This was just to be sure that I went beyond some minimums (like the free 10 TB per month of region outbound transfer).
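To put one number on it, the same loop can be piped into awk to total the TX bytes over all 18 nodes (a quick sketch; note that these interface counters are cumulative since boot, not only for this test):

for i in $(kubectl get nodes -o wide | awk '/[.]/{print $7}')
do
 ssh $i /usr/sbin/ifconfig ens3
done | awk '/TX packets/{sum+=$5} END{printf "%.1f TiB sent in total\n", sum/1024^4}'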
Graphs are not only about cost.
It is important to check the compute usage:
These are my 3 workers (I scaled the node pool to 18 workers only at the end).
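For reference, the same check and the scaling can also be done from the command line. A small sketch (the node pool OCID is a placeholder, and kubectl top requires the metrics server):

# per-node CPU and memory usage
kubectl top nodes
# scale the OKE node pool from 3 to 18 workers (placeholder OCID)
oci ce node-pool update --node-pool-id ocid1.nodepool.oc1..aaaa... --size 18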