In a multitenant application, you may store many customers in one table and use PostgreSQL Row-Level Security to isolate them. For physical isolation, you can use dedicated partitions (partitioned by list) on different tablespaces. With YugabyteDB, this is enough to place tenants on dedicated nodes, zones, or regions. You may also decide to put some customers in different schemas, databases, or even clusters. In that case, moving rows must be done with an INSERT (or COPY) into the destination table and a DELETE from the source table.
Here is a simple way to do it by combining PostgreSQL features: the RETURNING clause, the WITH clause (Common Table Expression), and a Foreign Data Wrapper when moving across databases.
I'm running this in YugabyteDB, where all these features are available.
I create a demo table:
create table demo as select tenant_id, id, 0 a, 0 b, 0 c
from generate_series(1,100) tenant_id, generate_series(1,1000) id
;
I create a Foreign Data Wrapper table pointing to it, to show how it works between different databases:
create extension if not exists postgres_fdw;
create server my_source_fdw foreign data wrapper postgres_fdw
options (host 'yb0.pachot.net', dbname 'yugabyte')
;
create schema my_source_schema;
create user mapping for "yugabyte" server my_source_fdw;
import foreign schema public limit to ( demo )
from server my_source_fdw into my_source_schema
;
I create an empty table with the same structure:
create table demo2 as
select * from my_source_schema.demo
where null is not null --> always false, to read no rows
;
alter table demo2 add primary key (tenant_id, id);
And now I'm ready to move the rows for tenant_id=42:
with deleted_rows as (
delete from my_source_schema.demo
where tenant_id=42 --> this is the condition for rows to move
returning *
) insert into demo2 select deleted_rows.* from deleted_rows
;
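To verify the move, you can count the rows for that tenant on both sides (a simple check, assuming no concurrent writes on this tenant):

```sql
-- tenant 42 should now exist only in the destination table
select count(*) from demo2 where tenant_id=42;
select count(*) from my_source_schema.demo where tenant_id=42;
```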
This assumes that the application doesn't change the state of the rows while they are being moved. You can set the SERIALIZABLE isolation level (set default_transaction_isolation to serializable) to have conflicting transactions automatically retried or failed. In some cases, I got Read time should NOT be specified for serializable isolation level - check #14284
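As a sketch, the same move can be wrapped in an explicit serializable transaction; on a serialization error, the application rolls back and simply re-runs the whole block:

```sql
begin transaction isolation level serializable;
with deleted_rows as (
 delete from my_source_schema.demo
 where tenant_id=42
 returning *
) insert into demo2 select deleted_rows.* from deleted_rows;
commit; -- on a serialization failure: rollback and retry the block
```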
If you cannot afford application downtime on the rows being moved, there are other solutions: triggers or Change Data Capture.