Franck Pachot for YugabyteDB

Parallel export to CSV in YugabyteDB thanks to yb_hash_code() and PostgreSQL-compatibility

When you want to back up the state of the database in case of failure, a dump is not the right solution: a backup from a snapshot is safer and faster. However, when the goal is to import the data into a future version, or even into another database, pg_dump (or ysql_dump, the YugabyteDB version of it) is the right tool. And when the volume is large and you want more flexibility, to parallelize the export for example, writing to multiple CSV files can be a good solution.

This is easy with YugabyteDB, which is PostgreSQL-compatible, using the \copy command. Here is a script that generates multiple \copy commands, each exporting to its own file, using yb_hash_code() to read one range of a hash-sharded table.

The result is:

                                       copy commands (can be run in parallel)
---------------------------------------------------------------------------------------------------------------------
 \copy (select * from orders where yb_hash_code(order_id) between     0 and  8191) to 'orders_1of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between  8192 and 16383) to 'orders_2of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between 16384 and 24575) to 'orders_3of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between 24576 and 32767) to 'orders_4of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between 32768 and 40959) to 'orders_5of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between 40960 and 49151) to 'orders_6of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between 49152 and 57343) to 'orders_7of8.csv' (format csv)
 \copy (select * from orders where yb_hash_code(order_id) between 57344 and 65535) to 'orders_8of8.csv' (format csv)
(8 rows)


The goal is to run each of them in parallel, when you are sure that there are no writes to the table (to keep the export consistent), reading from the nearest replica with:

set default_transaction_read_only=on;
set yb_read_from_followers = on;
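A simple way to run them all in parallel is a small driver script. Here is a minimal sketch (not from the original post): it assumes the generated commands were saved, one per line, to a hypothetical export_orders.sql (see the capture sketch after the generator query below) and that the usual PGHOST/PGDATABASE/PGUSER environment variables point to the cluster:

#!/bin/bash
# Run each generated \copy in its own psql session, in parallel.
# export_orders.sql (assumed file) contains one \copy command per line.
while IFS= read -r copy_cmd
do
  psql -v ON_ERROR_STOP=1 \
       -c "set default_transaction_read_only=on" \
       -c "set yb_read_from_followers=on" \
       -c "$copy_cmd" &
done < export_orders.sql
wait   # wait for all background exports to complete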

I generated this with the following query:

select format(
 '\copy (select * from %I where yb_hash_code(%s) between %5s and %5s) to %L (format csv)'
 ,tablename,primarykey,min,max,tablename||'_'
 ||to_char(row_number() over t,'FM99999')
 ||'of'
 ||to_char(count(*)over t,'FM99999')
 ||'.csv'
) as "copy commands (can be run in parallel)" from (select min(h),max(h)
 , 'orders'     as tablename     -----> 'orders' is the name of the table
 , 'order_id'   as primarykey    -----> 'order_id' is the comma-separated list of primary key columns (hash part)
 from generate_series(0,65535) h
 group by h/(65536/ 8 )         -----> 8 is the number of files
) buckets
window t as (partition by tablename order by min rows between unbounded preceding and unbounded following )
;

You can change orders to your table name, order_id to the comma-separated list of primary key columns in the HASH part, and 8 to the number of files you want to export to. yb_hash_code() returns a number from 0 to 65535 that is used to distribute the table rows on the hash-sharded part of the primary key. Each tablet stores a range of this hash code and is sorted on it, which ensures that each copy query reads a single contiguous range from the large table.
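To capture the generated commands into that export_orders.sql script, you can run the generator query in unaligned, tuples-only mode so that only the raw \copy lines are written. A minimal sketch, where gen_copy.sql is a hypothetical file holding the query above:

# -A: unaligned output, -t: tuples only (no header, no row-count footer)
# gen_copy.sql is an assumed file containing the generator query shown above.
psql -At -f gen_copy.sql > export_orders.sql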

Of course, you can modify it as you want. For example, you can write to a gzipped file by changing the template to:

'\copy (select * from %I where yb_hash_code(%s) between %5s and %5s) to program ''(gzip > %I.gz)'' (format csv)'

This can be further improved by looking at the tablet distribution and connecting to the node where each leader is. But it is probably sufficient to enable follower reads and simply distribute the connections to the servers closest to where you export the files.
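For example, a small variation of the driver loop above can round-robin the connections across the nodes. A sketch with hypothetical host names:

# Distribute the exports across several nodes (host names are placeholders).
hosts=(yb-node1 yb-node2 yb-node3)
i=0
while IFS= read -r copy_cmd
do
  psql -h "${hosts[i % ${#hosts[@]}]}" \
       -c "set default_transaction_read_only=on" \
       -c "set yb_read_from_followers=on" \
       -c "$copy_cmd" &
  i=$((i+1))
done < export_orders.sql
wait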

It is always good to check the execution plan to make sure that the predicate on the hash key is properly pushed down:

yugabyte=# explain(analyze, buffers, dist, costs off)
yugabyte-# (select * from orders where yb_hash_code(order_id) between     0 and  8191);
                                     QUERY PLAN
------------------------------------------------------------------------------------
 Index Scan using orders_pkey on orders (actual time=2.530..2.680 rows=99 loops=1)
   Index Cond: ((yb_hash_code(order_id) >= 0) AND (yb_hash_code(order_id) <= 8191))
   Storage Index Read Requests: 1
   Storage Index Execution Time: 4.000 ms
 Planning Time: 0.062 ms
 Execution Time: 2.747 ms
 Storage Read Requests: 1
 Storage Write Requests: 0
 Storage Execution Time: 4.000 ms
 Peak Memory Usage: 8 kB
(10 rows)

One Read Request means that I've read from one tablet only, and the Index Scan on the primary key is the fastest access path (equivalent to an Index Only Scan here, because the table is stored in its primary key index).

Don't forget to export the metadata with ysql_dump -s so that you have everything needed to import it back.
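As a sketch (the database name and file names are only examples), the schema export and a later re-import could look like:

# Export only the metadata (DDL) of the database.
ysql_dump -s -d yugabyte > schema.sql

# On the target: recreate the schema, then load the CSV files in parallel.
psql -d yugabyte -f schema.sql
for f in orders_*of8.csv
do
  psql -d yugabyte -c "\copy orders from '$f' (format csv)" &
done
wait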
