One of the languages that Postgres offers for writing stored procedures is PL/pgSQL. One of its features is the exception block, which is used to catch errors raised in a code block. Appropriate use of this construct can improve code reliability, but when overused it can degrade performance.
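For context, here's a small illustrative example of the construct. It's my own sketch, not taken from the benchmark below; the table and function names are made up, and it assumes a unique key so there is a realistic error to catch.

```sql
-- Illustrative only: swallow a duplicate-key error instead of aborting the caller.
create table if not exists demo_users(id bigint primary key, name text);

create or replace function try_add_user(p_id bigint, p_name text)
returns boolean language plpgsql as $$
begin
  insert into demo_users(id, name) values (p_id, p_name);
  return true;
exception when unique_violation then
  return false;  -- the id already exists; report failure instead of raising
end;
$$;
```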
The official documentation says the following:
https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING
> A block containing an EXCEPTION clause is significantly more expensive to enter and exit than a block without one. Therefore, don't use EXCEPTION without need.
But what exactly does "significantly more expensive" mean? Is it 50% slower? 100% slower? Let's check this out.
First, we'll need a table with some data. I'll use the following one:
```sql
create table ops(id bigint, body jsonb);
```
I filled this table with about 20 million rows. Each row contains some JSON data in the `body` column. The exact structure varies, but there's a common field: all stored JSONs are objects containing a `value` key.
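I won't reproduce the exact data set here, but a table like this could be filled with synthetic rows along these lines. This is only a sketch; the `extra` field and the use of `generate_series`/`md5` are my assumptions, not the original loading method.

```sql
-- Sketch: load ~20 million synthetic rows; the real data set differs,
-- but every body is a JSON object containing a 'value' key.
insert into ops(id, body)
select i,
       jsonb_build_object('value', md5(i::text), 'extra', i % 100)
from generate_series(1, 20000000) as i;
```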
With that in place, I'll do some simple processing of the data. For each row in the table I will invoke the following function:
```sql
create or replace function process_op(op ops)
returns void volatile language plpgsql
as $$
begin
  insert into data(id, body) select op.id, op.body->>'value';
end;
$$;
```
This simply extracts the `value` element and writes it into a separate table, `data`, which has the following structure:
```sql
create table data(id bigint, body text);
```
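As a quick smoke test (my own addition, not part of the benchmark), the function can be exercised on a handful of rows before running the full workload:

```sql
-- Process a few rows and check what ends up in data.
select process_op(op) from ops as op limit 5;
select * from data;
```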
I will compare the results with a second scenario, identical to the first one except that the following function is used instead:
```sql
create or replace function process_op_exn(op ops)
returns void volatile language plpgsql
as $$
begin
  begin
    insert into data(id, body) select op.id, op.body->>'value';
  exception when division_by_zero then
    -- an empty handler body is allowed in PL/pgSQL; the error would simply be swallowed
  end;
end;
$$;
```
The only difference is the extra `exception` block that catches a specific exception type. The exact type doesn't matter here, because it will never be raised; I have to specify one nonetheless, so I went with `division_by_zero`.
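As a side note, here's a quick demonstration, separate from the benchmark, that such a handler only does anything when the named error is actually raised:

```sql
-- Not part of the benchmark: the handler fires only when
-- division_by_zero is actually raised.
do $$
begin
  begin
    perform 1 / 0;  -- raises division_by_zero
  exception when division_by_zero then
    raise notice 'caught division_by_zero';
  end;
end;
$$;
```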
Now, here's the script that will do the timing of our processing:
```sh
#!/bin/sh

# run a statement against the test database
sql() {
    psql -c "$@" ex
}

# empty the output table before each run
trunc() {
    sql 'truncate data'
}

trunc
echo 'process_op'
sql 'set jit=off; explain analyze select process_op(op) from ops as op'

trunc
echo 'process_op_exn'
sql 'set jit=off; explain analyze select process_op_exn(op) from ops as op'
```
This script uses an empty database `ex` that contains only our two functions and the input and output tables. Before each test, the output table is truncated.
Note that `jit` is disabled, as it would only introduce additional variance into the timing results.
I'll run this script 10 times and collect the times reported by `explain analyze`.
Here are the times, in milliseconds, that I got on my machine (PostgreSQL 14.10 (Ubuntu 14.10-0ubuntu0.22.04.1)):
| process_op | process_op_exn |
|---|---|
| 97213.305 | 134802.694 |
| 97133.119 | 134685.843 |
| 104139.789 | 141188.065 |
| 103916.659 | 141514.376 |
| 104230.254 | 140969.064 |
| 104054.297 | 140396.022 |
| 104578.363 | 140755.807 |
| 104225.124 | 140159.735 |
| 104693.086 | 136347.270 |
| 100057.957 | 142008.653 |
| | process_op | process_op_exn | overhead [%] |
|---|---|---|---|
| min | 97133.119 | 134685.843 | 38.66 |
| max | 104693.086 | 142008.653 | 35.64 |
| avg | 102424.195 | 139282.753 | 35.99 |
| stddev | 3070.819 | 2845.707 | |
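For reference, the overhead column is simply the relative difference between the two timings; for example, for the `min` row above:

```sql
-- overhead[%] = (process_op_exn / process_op - 1) * 100
select round((134685.843 / 97133.119 - 1) * 100, 2);  -- => 38.66
```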
I repeated the measurements for half of the data size (so about 10 million rows):
| process_op | process_op_exn |
|---|---|
| 54302.993 | 76026.787 |
| 54081.487 | 72822.546 |
| 53893.595 | 79227.740 |
| 116475.741 | 71090.325 |
| 52185.475 | 71807.784 |
| 55144.720 | 79174.003 |
| 59353.608 | 123744.166 |
| 59466.183 | 77938.334 |
| 77143.960 | 83018.802 |
| 61518.751 | 74555.020 |
| | process_op | process_op_exn | overhead [%] |
|---|---|---|---|
| min | 52185.475 | 71090.325 | 36.23 |
| max | 116475.741 | 123744.166 | 6.24 |
| avg | 64356.651 | 80940.551 | 25.77 |
| stddev | 19680.908 | 15501.945 | |
And here are the measurements repeated for double the data size (about 40 million rows):
| process_op | process_op_exn |
|---|---|
| 286233.511 | 297223.475 |
| 284152.678 | 351405.713 |
| 216810.749 | 389101.092 |
| 295131.977 | 297492.770 |
| 204913.139 | 274064.567 |
| 202276.802 | 331195.117 |
| 266659.591 | 316588.293 |
| 272660.505 | 336403.196 |
| 299557.913 | 308608.560 |
| 210020.255 | 364334.027 |
| | process_op | process_op_exn | overhead [%] |
|---|---|---|---|
| min | 202276.802 | 274064.567 | 35.49 |
| max | 299557.913 | 389101.092 | 29.89 |
| avg | 253841.712 | 326641.681 | 28.68 |
| stddev | 40312.450 | 34891.699 | |
The function with the `exception` block clearly takes longer to complete. In the most consistent run (20 million rows) the slowdown is about 35-38%; the noisier 10- and 40-million-row runs show a similar average overhead of roughly 26-29%. This overhead is definitely noticeable, but it will of course depend on the workload: the more work is performed inside such a block, the less noticeable it becomes. The important part is that the performance hit is taken even if the exception is never raised.
EDIT 02-02-2024: Added measurements for half and double data set sizes.
Top comment:

With an exception block, PostgreSQL takes a savepoint on entering the block so that it can roll back to it if an error occurs, and savepoints are quite expensive in current versions of PostgreSQL.
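To illustrate that point, here is a rough plain-SQL analogue of what each exception block does implicitly (my own sketch, not from the comment): a savepoint is established on entry, released on normal exit, and rolled back to on error.

```sql
-- Rough plain-SQL analogue of one exception block (illustration only):
begin;
savepoint blk;                               -- taken on entering the block
insert into data(id, body) values (1, 'x');  -- the block's body
release savepoint blk;                       -- normal exit
-- on an error, the equivalent would instead be: rollback to savepoint blk;
commit;
```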