Franck Pachot for YugabyteDB

YugabyteDB on OKE: testing batch size for bulk load

Here is a short blog post where I test different insert batch sizes on YugabyteDB. I ran this on Oracle Cloud Kubernetes (OKE), which is also the occasion to show how easy it is to set up, run, and estimate costs. I created a cluster with 3 workers with the following compute shape:

Shape configuration
Shape: VM.Standard.E3.Flex
OCPU count: 15
Network bandwidth (Gbps): 15
Memory (GB): 240
Local disk: Block storage only

Here are the nodes:

cloud@cloudshell:~ (eu-frankfurt-1)$ kubectl get nodes -o wide

NAME          STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                      CONTAINER-RUNTIME
              Ready    node    11m   v1.24.1                               Oracle Linux Server 7.9   5.4.17-2136.311.6.1.el7uek.x86_64   cri-o://1.24.1-76.el7
              Ready    node    11m   v1.24.1                               Oracle Linux Server 7.9   5.4.17-2136.311.6.1.el7uek.x86_64   cri-o://1.24.1-76.el7
              Ready    node    11m   v1.24.1                               Oracle Linux Server 7.9   5.4.17-2136.311.6.1.el7uek.x86_64   cri-o://1.24.1-76.el7
cloud@cloudshell:~ (eu-frankfurt-1)$ 

I create a namespace for the YugabyteDB cluster and install it from the Helm chart with the following values:

kubectl create namespace yb-demo

cat > overrides.yaml <<'YAML'
storage:
  ephemeral: false  # will not allocate PVs when true
  master:
    count: 2
    size: 100Gi
    storageClass: "oci-bv"
  tserver:
    count: 2
    size: 100Gi
    storageClass: "oci-bv"
replicas:
  master: 3
  tserver: 12
resource:
  master:
    requests:
      cpu: 2
      memory: 4Gi
    limits:
      cpu: 2
      memory: 4Gi
  tserver:
    requests:
      cpu: 4
      memory: 32Gi
    limits:
      cpu: 4
      memory: 32Gi
gflags:
  master:
    # these set the placement info from OCI metadata
    placement_cloud: "$(curl -s"
    placement_region: "$(curl -s"
    placement_zone: "$(curl -s"
  tserver:
    # these set the placement info from OCI metadata
    placement_cloud: "$(curl -s"
    placement_region: "$(curl -s"
    placement_zone: "$(curl -s"
    # these are not yet default in the version I used
    ysql_enable_packed_row: "true"
    yb_enable_read_committed_isolation: "true"
YAML

helm install yb-demo yugabytedb/yugabyte --namespace yb-demo \
                                         --wait -f overrides.yaml


I have 12 yb-tserver pods, each limited to 4 vCPUs, 32GB RAM, and 200GB of storage:

cloud@cloudshell:~ (eu-frankfurt-1)$ kubectl get sts -n yb-demo -o wide

NAME         READY   AGE    CONTAINERS              IMAGES
yb-master    3/3     4d8h   yb-master,yb-cleanup    yugabytedb/yugabyte:,yugabytedb/yugabyte:
yb-tserver   12/12   4d8h   yb-tserver,yb-cleanup   yugabytedb/yugabyte:,yugabytedb/yugabyte:

cloud@cloudshell:~ (eu-frankfurt-1)$ kubectl get pod -n yb-demo -o wide

NAME            READY   STATUS    RESTARTS        AGE    IP             NODE          NOMINATED NODE   READINESS GATES
yb-master-0     2/2     Running   0               4d8h   <none>           <none>
yb-master-1     2/2     Running   0               4d8h   <none>           <none>
yb-master-2     2/2     Running   0               4d8h   <none>           <none>
yb-tserver-0    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-1    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-10   2/2     Running   0               4d7h   <none>           <none>
yb-tserver-11   2/2     Running   0               4d7h   <none>           <none>
yb-tserver-2    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-3    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-4    2/2     Running   3 (2d21h ago)   4d8h   <none>           <none>
yb-tserver-5    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-6    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-7    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-8    2/2     Running   0               4d8h   <none>           <none>
yb-tserver-9    2/2     Running   2 (2d21h ago)   4d7h   <none>           <none>

cloud@cloudshell:~ (eu-frankfurt-1)$ kubectl get svc -n yb-demo -o wide

NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                                                                      AGE    SELECTOR
yb-master-ui         LoadBalancer    7000:31025/TCP                                                               4d8h   app=yb-master
yb-masters           ClusterIP      None            <none>            7000/TCP,7100/TCP                                                            4d8h   app=yb-master
yb-tserver-service   LoadBalancer   6379:31176/TCP,9042:32346/TCP,5433:31696/TCP                                 4d8h   app=yb-tserver
yb-tservers          ClusterIP      None            <none>            9000/TCP,12000/TCP,11000/TCP,13000/TCP,9100/TCP,6379/TCP,9042/TCP,5433/TCP   4d8h   app=yb-tserver

The yb-tserver-service is the load balancer to connect to the PostgreSQL-compatible endpoint (YSQL, on port 5433) or the Cassandra-like endpoint (YCQL, on port 9042). The yb-master-ui service exposes the Web Console on port 7000. I get its URL with this:

kubectl get svc -n yb-demo \
 --field-selector ""  \
 -o jsonpath='Webconsole:

In this console, you can check that the Cloud / Region / Zone placement info is correctly set (I set it from the OCI metadata).

Tablet-Peers by Availability Zone

Then I set the replication factor to 3 with 1 replica in each Availability Domain:

kubectl -n yb-demo exec yb-master-1 -c yb-master -- yb-admin \
 --master_addresses \
yb-master-0.yb-masters.yb-demo.svc.cluster.local:7100,\
yb-master-1.yb-masters.yb-demo.svc.cluster.local:7100,\
yb-master-2.yb-masters.yb-demo.svc.cluster.local:7100 \
 modify_placement_info \
oci.eu-frankfurt-1.EU-FRANKFURT-1-AD-1,\
oci.eu-frankfurt-1.EU-FRANKFURT-1-AD-2,\
oci.eu-frankfurt-1.EU-FRANKFURT-1-AD-3 \
 3 rw

This can also be set at tablespace level for different placement of tables or partitions.
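For example, a tablespace spreading one replica per Availability Domain could be sketched like this (the tablespace name and zone values here are hypothetical; use the cloud/region/zone strings shown in the console):

```sql
-- hypothetical tablespace: one replica in each of three Availability Domains
CREATE TABLESPACE ad_spread_ts WITH (replica_placement='{
  "num_replicas": 3,
  "placement_blocks": [
    {"cloud":"oci","region":"eu-frankfurt-1","zone":"AD-1","min_num_replicas":1},
    {"cloud":"oci","region":"eu-frankfurt-1","zone":"AD-2","min_num_replicas":1},
    {"cloud":"oci","region":"eu-frankfurt-1","zone":"AD-3","min_num_replicas":1}
  ]}');

-- a table or partition then opts in with:
CREATE TABLE demo (k int PRIMARY KEY) TABLESPACE ad_spread_ts;
```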

I get the PostgreSQL endpoint parameters with:

PGUSER=yugabyte PGPASSWORD=yugabyte PGDATABASE=yugabyte
PGHOST=$(kubectl get services -n yb-demo --field-selector ""  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}')
PGPORT=$(kubectl get services -n yb-demo --field-selector ""  -o jsonpath='{.items[0].spec.ports[?(@.name=="tcp-ysql-port")].port}')
set | grep ^PG

This shows the environment variables to set so that a simple psql connects with my parameters.

They connect through the load balancer, and each connection can land on a different node:

I'll use my old ybio to insert rows in bulk:

git clone
cd ybio
psql -ef ybio.sql

I run the following to insert 100 million rows of 100 to 1000 bytes, with batch sizes from 1 million rows down to 10 rows:

for rowsize   in {2..3} ; do
for tablets   in {2..3} ; do
echo "Row size: 1e${rowsize}, Tablets: $((8 * $tablets))"
for batchsize in {6..1} ; do
psql -c "
call setup(
 tab_rows=>1e8::int,
 batch_size=>1e${batchsize}::int,
 tab_num=>${batchsize},
 tab_prefix=>'bench_1e${rowsize}_$((8 * $tablets))_1e',
 tab_tablets=>$((8 * $tablets)),
 ind_tablets=>$((8 * $tablets)),
 filler=>1e${rowsize}::int
);
"  2>&1 >/dev/null | awk 'END{print}'
done ; done ; done | tee log.txt

This uses powers of ten for the batch size and puts that number in the table name. I output only the last line, the completion message, which shows the total insertion rate.
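As a sanity check, the batch sizes and zero-padded suffixes that the {6..1} loop produces can be listed without touching the database (this only replays the shell arithmetic, nothing here is YugabyteDB-specific):

```shell
# list each batch size (a power of ten) with the suffix it produces in the table name
for batchsize in {6..1} ; do
  printf 'batch_size=%7d rows -> table suffix 1e%04d\n' "$((10**batchsize))" "$batchsize"
done
```

The 1e0006 suffix of a table like bench_1e2_16_1e0006 therefore corresponds to the 1-million-row batches.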

I'm running with different row sizes and numbers of tablets to be sure there are no side effects. Here is an excerpt with small rows (100-byte filler) and 16 tablets:

Row size: 1e2, Tablets: 16
NOTICE:  Table bench_1e2_16_1e0006 Progress:  100.00 % (   100000000 rows) at  108213 rows/s
NOTICE:  Table bench_1e2_16_1e0005 Progress:  100.00 % (   100000000 rows) at  115000 rows/s
NOTICE:  Table bench_1e2_16_1e0004 Progress:  100.00 % (   100000000 rows) at   96681 rows/s
NOTICE:  Table bench_1e2_16_1e0003 Progress:  100.00 % (   100000000 rows) at   49054 rows/s
NOTICE:  Table bench_1e2_16_1e0002 Progress:  100.00 % (   100000000 rows) at   15510 rows/s
NOTICE:  Table bench_1e2_16_1e0001 Progress:  100.00 % (   100000000 rows) at    2335 rows/s

This is how I check that the batch size should be higher than 1,000 rows for bulk inserts, and 10,000 rows when the rows are small. But this also depends on your environment, and on the network between the application and the database node you connect to. That's the idea of this post: show that you can quickly test it in a lab. Then, what matters is the cost of this lab.
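To rank the runs captured in log.txt, the table name and final rate can be pulled out of each completion line with awk. The two printf lines below just fabricate a two-line sample log in the NOTICE format shown above, so the snippet is self-contained:

```shell
# fabricate a two-line sample log (same format as the NOTICE completion lines above)
printf 'NOTICE:  Table bench_1e2_16_1e0001 Progress:  100.00 %% (   100000000 rows) at    2335 rows/s\n'  > log.txt
printf 'NOTICE:  Table bench_1e2_16_1e0006 Progress:  100.00 %% (   100000000 rows) at  108213 rows/s\n' >> log.txt
# keep only the table name and the rows/s figure, sorted by increasing rate
awk '/Progress: *100.00 %/ {print $3, $(NF-1)}' log.txt | sort -k2 -n
```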

Resources and Cost

During these tests, the load was homogeneous across the 3 Kubernetes workers in terms of CPU utilization:
The CPU usage is low: 30% of the 30 vCPUs (the worker nodes have 15 OCPUs, which are 15 hyperthreaded Intel cores), so 60% of the cores. This provides the best response time and leaves enough capacity to accept more tablet leaders in case of an Availability Domain failure.

The memory usage:
Having free memory is necessary, especially for heavy writes, as it is used by the filesystem buffers.

Write I/O:
I suspect an error in this chart; it seems to be cumulative. Looking at the Disk Bytes Written per hour metric for one node:


Read I/O:

Network Bytes Out:

Network Bytes In:

The network traffic between Availability Domains is free.

The cost is mostly for the compute instance:

The cost of those 100,000 rows inserted per second was about 60 USD per day, with a highly available 3-worker Kubernetes cluster. It is easy to calculate from the pricing list:
Compute: 0.025 USD/OCPU-hour * 24 hours * 15 OCPUs * 3 instances = 27 USD
Memory: 0.0015 USD/GB-hour * 24 hours * 240 GB * 3 instances = 26 USD
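The arithmetic above can be replayed with awk (the 0.025 USD per OCPU-hour and 0.0015 USD per GB-hour rates are taken from the price list at the time of writing, and may have changed since):

```shell
# daily cost = hourly rate * 24 hours * shape size * 3 instances
awk 'BEGIN {
  compute = 0.025  * 24 * 15  * 3   # USD/OCPU-hour * hours * OCPUs * instances
  memory  = 0.0015 * 24 * 240 * 3   # USD/GB-hour   * hours * GB    * instances
  printf "compute: %.2f USD/day\nmemory: %.2f USD/day\ntotal: %.2f USD/day\n",
         compute, memory, compute + memory
}'
```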

When you want to stop this lab, you can delete the Kubernetes cluster, but the load balancers and persistent volumes will stay. You can remove them beforehand with:

helm uninstall yb-demo -n yb-demo
kubectl delete pvc --namespace yb-demo --all
kubectl delete namespace yb-demo
