Franck Pachot for YugabyteDB

Adding a PostgreSQL extension to YugabyteDB - example with timestamp9

YugabyteDB is PostgreSQL-compatible, which includes CREATE EXTENSION. The most interesting extensions are pre-bundled, including those from PostgreSQL contrib as well as some external ones that never made it into core PostgreSQL. Other extensions can be installed, like in PostgreSQL, with a few YugabyteDB specifics:

  • They must be installed on all nodes, in the ./postgres/share/extension and ./postgres/lib subdirectories of the YugabyteDB installation
  • If they need an entry in shared_preload_libraries, this is set with the --ysql_pg_conf_csv flag (a sketch follows this list)
  • They must be tested. Extensions that interact only with the SQL layer should work as-is because YugabyteDB is based on a fork of PostgreSQL, but an extension may not be compatible with the distributed storage and transaction layer of YugabyteDB
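
As a minimal sketch of the second point, assuming a yb-tserver started directly (with yugabyted or the platform tools, the same flag is passed through their own configuration), preloading a library on every node could look like this:

# pass PostgreSQL parameters to the YSQL layer of each yb-tserver
yb-tserver \
  --ysql_pg_conf_csv="shared_preload_libraries='pg_stat_statements'" \
  ...                # plus the other yb-tserver flags of your deployment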

All this is documented. This blog post shows a quick way to build and test an extension. I'm using an example: timestamp9 is a PostgreSQL extension to add a nanosecond precision timestamp datatype, similar to the Oracle Database TIMESTAMP(9) or Db2 TIMESTAMP.
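
Before building anything, it is worth checking whether the extension you need is already bundled. One way, from ysqlsh, is the standard PostgreSQL catalog view:

-- list the extensions available for CREATE EXTENSION on this installation
select name, default_version, comment
from pg_available_extensions
order by name;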

The idea is to build the extension files (.control, .sql, .so) in a PostgreSQL 11 development environment and copy them to YugabyteDB, which is compatible with PostgreSQL 11.2 (of course, this will need to be updated when YugabyteDB merges a newer PostgreSQL version).
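
A quick way to confirm the PostgreSQL version an extension must be built against is to ask the YSQL layer itself:

-- in ysqlsh: the version string shows the PostgreSQL fork YugabyteDB is based on
select version();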

Build

I do all that in a Docker container to get an isolated environment that is easy to re-test from scratch, and, thanks to the layering of the image, I can troubleshoot layer by layer without restarting everything.

I build the extension in a staging container and then copy the files to the target YugabyteDB image.


cat > Dockerfile <<'DOCKERFILE'

# I build the extension in the same environment as the target YugabyteDB
FROM yugabytedb/yugabyte:latest as build_extension
# Updating all packages and installing development packages
RUN yum update -y
RUN yum groupinstall -y 'Development Tools'
# installing postgresql11-devel (same version as YugabyteDB compatibility)
# (needs centos-release-scl-rh for llvm)
RUN yum install -y https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm
RUN yum install -y centos-release-scl-rh
RUN yum install -y postgresql11-devel
# add pg_config to the path
ENV PATH="/usr/pgsql-11/bin:${PATH}"
# removing all files in pkglibdir sharedir/extension to easily tar the new ones
RUN rm -rf /usr/pgsql-11/lib/* /usr/pgsql-11/share/extension/*
#############################################################
# building timestamp9
#############################################################
WORKDIR /var/tmp
RUN yum install -y git cmake3
RUN git clone https://github.com/fvannee/timestamp9.git
RUN mkdir timestamp9/build
WORKDIR /var/tmp/timestamp9/build
RUN cmake3 ..
RUN make
RUN make install
#############################################################
# packing all new files into a tar for easy COPY --from
WORKDIR /usr/pgsql-11/
RUN tar -cvf extensions.tar lib share/extension
# now building the target container
FROM yugabytedb/yugabyte:latest
# copying the extension files into YugabyteDB
WORKDIR /home/yugabyte/postgres
COPY --from=build_extension /usr/pgsql-11/extensions.tar .
RUN tar -xvf extensions.tar
WORKDIR /home/yugabyte
DOCKERFILE

docker build -t yb-extensions .

Here is a sample output on my laptop (with all layers already in cache):
(screenshot of the docker build output)
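
To double-check that the files landed where the YSQL layer expects them, a quick look into the freshly built image is enough (timestamp9 installs a .control file, SQL scripts, and a shared library):

docker run --rm yb-extensions bash -c "
 ls postgres/share/extension | grep timestamp9
 ls postgres/lib             | grep timestamp9
"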

I can use this image directly, or copy the .tar out of it to be extracted into the postgres subdirectory of any YugabyteDB installation:

docker run --rm -v "$PWD:/export" yb-extensions \
 cp /home/yugabyte/postgres/extensions.tar /export
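
As a sketch of that second option, the archive is simply extracted into the postgres subdirectory on every node (the /home/yugabyte path is an assumption, adjust it to where YugabyteDB is installed):

# on each node of the cluster
tar -xvf extensions.tar -C /home/yugabyte/postgres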


Test it!

For a new datatype, I want to test some simple cast operations, which should have no issues as they stay in the SQL layer only. I also want to test compatibility with the YugabyteDB storage in LSM-tree by creating a table and an index on this datatype.

Here is the full test using my docker image:

docker run --rm yb-extensions bash -c "
yugabyted start --listen 0.0.0.0
until postgres/bin/pg_isready ; do sleep 1 ; done | uniq
ysqlsh -e <<'SQL'

-- extension creation
create extension timestamp9;

-- cast from bigint to timestamp9
select 1671926399123501311::timestamp9 as christmas;

-- creation of a table with a timestamp9 column
create table demo (id bigint primary key, ts timestamp9);
insert into demo values(0,1671926399123501311);
insert into demo select n , (
 (extract(epoch from now())+random())*1e9
 )::bigint::timestamp9 as ts
from generate_series(1,1000000) n;

-- test predicate pushdown
set yb_enable_expression_pushdown=on;
explain (costs off, analyze)
 select * from demo 
 where ts>'2022-12-24 23:59:59.123501311 +0000';
-- test indexing timestamp9
create index demo_ts on demo(ts asc);

-- test indexing ::timestamptz
create index demo_ts_tz on demo((ts::timestamptz) asc) include(ts);
explain (costs off, analyze)
 select * from demo
 where ts::timestamptz>'2022-12-24 23:59:59.123501311 +0000';

-- test indexing ::bigint
create index demo_ts_bi on demo((ts::bigint) hash) include(ts);
explain (costs off, analyze)
 select * from demo
 where ts::bigint='2022-12-24 23:59:59.123501311 +0000'::timestamp9::bigint;

SQL
"

The basic operations work without the need for additional changes, thanks to the PostgreSQL compatibility of the SQL layer:

(screenshot of the ysqlsh output)

However, the plan shows that there's no predicate push-down:

                             QUERY PLAN
--------------------------------------------------------------------
 Seq Scan on demo (actual time=2942.488..2942.489 rows=0 loops=1)
   Filter: (ts > '2022-12-24 23:59:59.123501311 +0000'::timestamp9)
   Rows Removed by Filter: 1000000

With the native timestamptz, I would have seen a Remote Filter instead of the PostgreSQL Filter with Rows Removed by Filter.
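
This is easy to check with the native datatype. A minimal sketch, where the demo_native table is only an illustration and not part of the test script above:

-- same data with the native timestamptz, for which the filter can be pushed down to the storage
create table demo_native (id bigint primary key, ts timestamptz);
insert into demo_native select id, ts::timestamptz from demo;
explain (costs off, analyze)
 select * from demo_native
 where ts>'2022-12-24 23:59:59.123501 +0000';

Note that the cast to timestamptz keeps only microsecond precision, which is the reason timestamp9 exists in the first place.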

Additionally, the index was not created, because the new datatype has no default operator class for the LSM access method:

create index demo_ts on demo(ts asc) include(ts);
ERROR:  data type timestamp9 has no default operator class for access method "lsm"
HINT:  You must specify an operator class for the index or define a default operator class for the data type.
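
This can be checked from SQL, a minimal sketch using the standard pg_opclass, pg_am and pg_type catalogs ("lsm" being the access method name shown in the error above):

-- operator classes available to the lsm access method for these datatypes
select am.amname, opc.opcname, typ.typname, opc.opcdefault
from pg_opclass opc
join pg_am   am  on am.oid  = opc.opcmethod
join pg_type typ on typ.oid = opc.opcintype
where am.amname = 'lsm'
  and typ.typname in ('timestamptz', 'timestamp9');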

If you need support for indexing a timestamp9, you can open a GitHub issue to get this extension built into YugabyteDB. This is probably needed only for compatibility with other databases, like Oracle or Db2, because you can also store your nanoseconds as a bigint and use the timestamp9 extension only for the casts and functions it provides.
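
Here is a minimal sketch of that last approach, where the events table is hypothetical and only the casts come from the extension (they were used above in both directions):

-- store nanoseconds since epoch as a plain bigint, indexable out of the box
create table events (id bigint primary key, ts_ns bigint);
create index events_ts on events (ts_ns asc);
-- use timestamp9 only at the edges, to display or compare as a timestamp
select id, ts_ns::timestamp9 as ts
  from events
 where ts_ns >= '2022-12-24 23:59:59.123501311 +0000'::timestamp9::bigint;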

Another possibility, if you want to index range queries on the timestamp9 datatype, is to create an expression index on the timestamptz cast:

yugabyte=# create index demo_ts_tz 
 on demo((ts::timestamptz) asc) include(ts);
CREATE INDEX

yugabyte=# explain (costs off, analyze, dist)
 select * from demo
 where ts::timestamptz>'2022-12-24 23:59:59.123501311 +0000';

                                                 QUERY PLAN
------------------------------------------------------------------------------------------------------------
 Index Scan using demo_ts_tz on demo (actual time=1.218..1.218 rows=0 loops=1)
   Index Cond: ((ts)::timestamp with time zone > '2022-12-24 23:59:59.123501+00'::timestamp with time zone)
   Storage Index Read Requests: 1
   Storage Index Execution Time: 0.000 ms
 Planning Time: 5.392 ms
 Execution Time: 1.376 ms
 Storage Read Requests: 1
 Storage Write Requests: 0
 Storage Execution Time: 0.000 ms
 Peak Memory Usage: 8 kB
(10 rows)

This is optimized as a single Storage Index Read Request that can seek() directly to the first key in the LSM-tree.

You can also index the nanoseconds from epoch as a bigint rather than a timestamp (hash-sharded here for point queries, or range-sharded with asc for range queries):

yugabyte=# create index demo_ts_bi 
 on demo((ts::bigint) hash) include(ts);
CREATE INDEX

yugabyte=# explain (costs off, analyze, dist)
 select * from demo
 where ts::bigint='2022-12-24 23:59:59.123501311 +0000'::timestamp9::bigint;

                                  QUERY PLAN
-------------------------------------------------------------------------------
 Index Scan using demo_ts_bi on demo (actual time=1.479..1.483 rows=1 loops=1)
   Index Cond: ((ts)::bigint = '1671926399123501311'::bigint)
   Storage Index Read Requests: 1
   Storage Index Execution Time: 1.000 ms
   Storage Table Read Requests: 1
   Storage Table Execution Time: 1.000 ms
 Planning Time: 0.113 ms
 Execution Time: 1.533 ms
 Storage Read Requests: 2
 Storage Write Requests: 0
 Storage Execution Time: 2.000 ms
 Peak Memory Usage: 0 kB
(12 rows)

In summary, thanks to the architecture of YugabyteDB, which reuses PostgreSQL code rather than re-implementing 20 years of SQL, a ton of features, tools, and extensions from the PostgreSQL ecosystem are easily available. When there is a need to adapt them to the distributed storage in LSM-trees, it can be done:

  • either by using the features already there (like the expression index here)
  • or through additional support in the YugabyteDB code, which is fully open source
