Different ways exist to implement locking mechanisms on arbitrary values. For example, PostgreSQL provides pg_advisory_lock, MySQL has get_lock, Oracle Database offers dbms_lock, and SQL Server uses sp_getapplock. YugabyteDB does not currently support the PostgreSQL advisory lock functions (#3642).
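For reference, this is what the PostgreSQL session-level advisory lock API looks like (shown for comparison only; these functions are the ones not yet supported by YugabyteDB):

```sql
-- PostgreSQL advisory locks on an arbitrary bigint key
select pg_advisory_lock(42);      -- waits until the exclusive lock on key 42 is acquired
select pg_try_advisory_lock(42);  -- returns immediately: true if acquired, false otherwise
select pg_advisory_unlock(42);    -- releases the session-level lock
```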
While cross-cluster locks may not be ideal for scaling out, there are simple cases where multiple sessions must synchronize on a shared or exclusive lock.
In such cases, SQL offers a solution to implement this mechanism using a simple table and the default row-locking behavior:
-- my functions are created in the "lock" schema
create schema lock;
-- the following table will hold the keys
create table lock.locked_keys ( key bigint primary key, ts timestamptz default now() );
-- the "lock"."exclusive()" function inserts the key if it doesn't exist, and locks it with a SELECT FOR UPDATE
create function lock.exclusive(key bigint) returns void as $$
begin
insert into lock.locked_keys(key) values (exclusive.key)
on conflict do nothing;
perform from lock.locked_keys
where locked_keys.key=exclusive.key for update;
return;
end;
$$ language plpgsql;
-- the "lock"."shared()" function inserts the key if it doesn't exist (disabling transactional writes so the insert does not hold an exclusive lock), and locks it with a SELECT FOR SHARE
create function lock.shared(key bigint) returns void as $$
begin
set local yb_disable_transactional_writes=true;
insert into lock.locked_keys(key) values (shared.key)
on conflict do nothing;
set local yb_disable_transactional_writes=false;
perform from lock.locked_keys
where locked_keys.key=shared.key for share;
return;
end;
$$ language plpgsql;
-- the "lock"."unlock()" function removes the key after waiting for concurrent locks (you are supposed to call it while holding the exclusive lock, so it should not wait)
create function lock.unlock(key bigint) returns void as $$
begin
delete from lock.locked_keys
where locked_keys.key=unlock.key;
return;
end;
$$ language plpgsql;
-- the "lock"."vacuum()" function removes the keys that may be left over by shared locks, in case many of them were used and they take up space
create function lock.vacuum() returns void as $$
begin
with l as ( select * from lock.locked_keys for update skip locked )
delete from lock.locked_keys where key in (select key from l);
return;
end;
$$ language plpgsql;
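With these functions in place, a critical section in an application could look like the following sketch (key 42 is an arbitrary example):

```sql
begin transaction;
select lock.exclusive(42);   -- blocks until no other session holds a lock on key 42
-- ... do the work that must be serialized on key 42 ...
select lock.unlock(42);      -- optional: remove the key; the lock itself is released at commit
commit;
```

A session needing only to prevent exclusive access by others would call lock.shared(42) instead, and multiple such sessions can proceed concurrently.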
Test: check the locks
I'll test it in two situations: one where the key already exists and one where it is created. For this, I acquire a lock in shared mode. As it is an auto-commit transaction, the lock is immediately released but the key remains in the table:
yugabyte=# select lock.shared(1);
shared
--------
(1 row)
yugabyte=# select * from lock.locked_keys ;
key | ts
-----+------------------------------
1 | 2024-05-09 09:53:41.24712+00
(1 row)
In Read Committed isolation level, I can check the locks that are acquired from the pg_locks view:
yugabyte=# show yb_effective_transaction_isolation_level;
yb_effective_transaction_isolation_level
------------------------------------------
read committed
(1 row)
yugabyte=# -- test shared -> STRONG_READ on row
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.shared(1);
shared
--------
(1 row)
yugabyte=*# select lock.shared(2);
shared
--------
(1 row)
yugabyte=*# \! sleep 1
yugabyte=*# select locktype, database, relation, mode, granted
, ybdetails->'keyrangedetails'->>'cols' as columns
from pg_locks
where relation='lock.locked_keys'::regclass
and locktype='row'
;
locktype | database | relation | mode | granted | columns
----------+----------+----------+--------------------------+---------+---------
row | 13248 | 18234 | STRONG_READ | t | ["2"]
row | 13248 | 18234 | STRONG_READ,STRONG_WRITE | t | ["2"]
row | 13248 | 18234 | STRONG_READ | t | ["1"]
(3 rows)
yugabyte=*# rollback;
ROLLBACK
yugabyte=# -- test exclusive -> STRONG_READ,STRONG_WRITE on row
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.exclusive(1);
exclusive
-----------
(1 row)
yugabyte=*# select lock.exclusive(3);
exclusive
-----------
(1 row)
yugabyte=*# \! sleep 1
yugabyte=*# select locktype, database, relation, mode, granted
, ybdetails->'keyrangedetails'->>'cols' as columns
from pg_locks
where relation='lock.locked_keys'::regclass
and locktype='row'
;
locktype | database | relation | mode | granted | columns
----------+----------+----------+--------------------------+---------+---------
row | 13248 | 18234 | STRONG_READ,STRONG_WRITE | t | ["3"]
row | 13248 | 18234 | STRONG_READ,STRONG_WRITE | t | ["3"]
row | 13248 | 18234 | STRONG_READ,STRONG_WRITE | t | ["1"]
(3 rows)
yugabyte=*# rollback;
ROLLBACK
Test: check the wait
I'll test the combinations of shared and exclusive locks, with a statement_timeout to check that shared/shared does not wait and the other combinations are blocking.
yugabyte=# -- test shared / shared -> not blocking
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.shared(1);
shared
--------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.shared(1)'
SET
shared
--------
(1 row)
yugabyte=*# rollback;
ROLLBACK
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.shared(4);
shared
--------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.shared(4)'
SET
shared
--------
(1 row)
yugabyte=*# rollback;
ROLLBACK
yugabyte=# -- test shared / exclusive -> blocking
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.shared(1);
shared
--------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.exclusive(1)'
SET
ERROR: XX000: Perform RPC (request call id 102) to 10.0.0.39:9100 timed out after 4.000s
LOCATION: YBCLockTuple, ../../src/yb/rpc/outbound_call.cc:639
yugabyte=*# rollback;
ROLLBACK
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.shared(5);
shared
--------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.exclusive(5)'
SET
ERROR: XX000: Perform RPC (request call id 108) to 10.0.0.39:9100 timed out after 4.000s
LOCATION: YBCLockTuple, ../../src/yb/rpc/outbound_call.cc:639
yugabyte=*# rollback;
ROLLBACK
yugabyte=# -- test exclusive / shared -> blocking
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.exclusive(1);
exclusive
-----------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.shared(1)'
SET
ERROR: XX000: Perform RPC (request call id 114) to 10.0.0.39:9100 timed out after 4.000s
LOCATION: YBCLockTuple, ../../src/yb/rpc/outbound_call.cc:639
yugabyte=*# rollback;
ROLLBACK
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.exclusive(6);
exclusive
-----------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.shared(6)'
SET
ERROR: XX000: Perform RPC (request call id 119) to 10.0.0.39:9100 timed out after 4.000s
LOCATION: YBFlushBufferedOperations, ../../src/yb/rpc/outbound_call.cc:639
yugabyte=*# rollback;
ROLLBACK
yugabyte=# -- test exclusive / exclusive -> blocking
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.exclusive(1);
exclusive
-----------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.exclusive(1)'
SET
ERROR: XX000: Perform RPC (request call id 124) to 10.0.0.39:9100 timed out after 4.000s
LOCATION: YBCLockTuple, ../../src/yb/rpc/outbound_call.cc:639
yugabyte=*# rollback;
ROLLBACK
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.exclusive(7);
exclusive
-----------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.exclusive(7)'
SET
ERROR: XX000: Perform RPC (request call id 129) to 10.0.0.39:9100 timed out after 4.000s
LOCATION: YBFlushBufferedOperations, ../../src/yb/rpc/outbound_call.cc:639
yugabyte=*# rollback;
ROLLBACK
Everything works as expected. If your application runs with a finite set of lock keys, you can leave them in the table. As I rolled back, only the keys inserted by shared locks remained (they were written in non-transactional mode):
yugabyte=# select * from lock.locked_keys;
key | ts
-----+-------------------------------
5 | 2024-05-09 10:03:33.360787+00
4 | 2024-05-09 10:03:17.881847+00
1 | 2024-05-09 09:57:58.186159+00
(3 rows)
Calling unlock() removes the key, but it is actually unlocked only when the transaction finishes:
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.exclusive(1);
exclusive
-----------
(1 row)
yugabyte=*# select lock.unlock(1);
unlock
--------
(1 row)
yugabyte=*# commit;
COMMIT
yugabyte=# select * from lock.locked_keys;
key | ts
-----+-------------------------------
5 | 2024-05-09 10:03:33.360787+00
4 | 2024-05-09 10:03:17.881847+00
(2 rows)
The vacuum() function removes all keys that are not currently locked:
yugabyte=# -- test vacuum -> remove all except currently locked
yugabyte=# begin transaction;
BEGIN
yugabyte=*# select lock.shared(4);
shared
--------
(1 row)
yugabyte=*# \! psql -c 'set statement_timeout=2000' -c 'select lock.vacuum()'
SET
vacuum
--------
(1 row)
yugabyte=*# commit;
COMMIT
yugabyte=# select * from lock.locked_keys;
key | ts
-----+-------------------------------
4 | 2024-05-09 10:03:17.881847+00
(1 row)
I've built this as an example, but I recommend designing it to fit your needs. In many cases, you don't need an additional table because a row in your relational tables already represents the business key to lock. For example, if you want only one concurrent transaction to update one customer's orders, you can select the customer row for update.
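Assuming hypothetical customers and orders tables keyed by customer_id, that pattern could look like:

```sql
begin transaction;
-- lock the customer row: concurrent transactions doing the same will wait here
select from customers where customer_id = 42 for update;
-- all order updates for this customer are now serialized
update orders set status = 'shipped' where customer_id = 42 and status = 'ready';
commit;
```

The row lock is released at commit or rollback, so no cleanup function is needed in this design.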