
Franck Pachot for AWS Heroes

pg_hint_plan with subqueries

As we have seen in the previous posts in this series, you define hints with the table aliases. But what if there is no alias you can reference? Look at the execution plan for the name the planner uses, which can be an internal one.
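As a quick reminder of the alias rule (a hypothetical one-liner against the demo1 table created below), the hint must name the alias, not the table:

```sql
-- The hint names the alias "a"; writing SeqScan(demo1) would be ignored here
explain (costs off)
/*+ SeqScan(a) */
select * from demo1 a where id = 42;
```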

Example:

drop table if exists demo2, demo1;
create table demo1 ( id bigint primary key, a int, b int, d int);
create table demo2 ( id bigint primary key, a int, b int, d int);
insert into demo1 select generate_series(1,1000), 0, 0, 0 ;
vacuum analyze demo1;
vacuum analyze demo2;
insert into demo1 select generate_series(100000,200000), 0, 0, 0 ;

Now I have the following query that takes 1000 rows from table demo1 to move them to demo2:

explain (costs off, analyze)
WITH del AS(
 DELETE FROM demo1 a
 WHERE a.id in (SELECT id from demo1 my_batch limit 1000)
 RETURNING *
)  INSERT INTO demo2 
   SELECT  id, a, b c
 FROM del;

Here is the execution plan:

                                               QUERY PLAN

--------------------------------------------------------------------------------------------------------
 Insert on demo2 (actual time=18.627..18.635 rows=0 loops=1)
   CTE del
     ->  Delete on demo1 a (actual time=0.415..17.051 rows=1000 loops=1)
           ->  Hash Semi Join (actual time=0.399..16.534 rows=1000 loops=1)
                 Hash Cond: (a.id = "ANY_subquery".id)
                 ->  Seq Scan on demo1 a (actual time=0.006..10.032 rows=101001 loops=1)
                 ->  Hash (actual time=0.355..0.360 rows=1000 loops=1)
                       Buckets: 1024  Batches: 1  Memory Usage: 79kB
                       ->  Subquery Scan on "ANY_subquery" (actual time=0.006..0.256 rows=1000 loops=1)
                             ->  Limit (actual time=0.003..0.131 rows=1000 loops=1)
                                   ->  Seq Scan on demo1 my_batch (actual time=0.002..0.089 rows=1000 loops=1)
   ->  CTE Scan on del (actual time=0.417..17.282 rows=1000 loops=1)
 Planning Time: 0.247 ms
 Execution Time: 18.702 ms
(14 rows)



This is not very efficient, because demo1 is fully scanned as the probe side of a hash join. Basically, PostgreSQL has hashed the 1000 ids I want to delete, but then reads the full table to find them. You may think: your statistics are not accurate, try to run ANALYZE. Ok, but my goal is to delete many rows, in batches of 1000, and I don't want to run ANALYZE before and after each batch. This is where pg_hint_plan helps. By design of my query (with LIMIT 1000), I want a Nested Loop starting from those rows in order to get predictable performance: the time depends on the number of rows processed rather than on the table size.

You may think that the join is between demo1 a, the table to delete from, and demo1 my_batch, the 1000 ids. But if you try to add hints like /*+ Leading( (my_batch a) ) NestLoop(my_batch a) */ you will not be successful.
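To see this for yourself, here is that unsuccessful attempt in full; the plan stays the same because the planner does not join my_batch directly to a:

```sql
-- These hints are silently ignored: after the IN (...) subquery is flattened,
-- "my_batch" is no longer a direct join partner of "a"
explain (costs off, analyze)
/*+ Leading( (my_batch a) ) NestLoop(my_batch a) */
WITH del AS(
 DELETE FROM demo1 a
 WHERE a.id in (SELECT id from demo1 my_batch limit 1000)
 RETURNING *
)  INSERT INTO demo2
   SELECT  id, a, b c
 FROM del;
```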

Look at the execution plan. The Hash Join is between the two following scans: Seq Scan on demo1 a and Subquery Scan on "ANY_subquery". This gives me the aliases I must use for the join order, direction, and method: a and "ANY_subquery".

If I want to force a Nested Loop, I can add the hint NestLoop("ANY_subquery" a), but this only says that the join between the two tables should be a Nested Loop Join. The order of the aliases in this hint does not matter. To fix the join order as well, we need a Leading hint with the join pair: Leading( ("ANY_subquery" a) )

explain (costs off, analyze)
/*+ Leading( ("ANY_subquery" a) ) NestLoop("ANY_subquery" a) */
WITH del AS(
 DELETE FROM demo1 a
 WHERE a.id in (SELECT id from demo1 my_batch limit 1000)
 RETURNING *
)  INSERT INTO demo2 
   SELECT  id, a, b c
 FROM del;

This is the plan I wanted:

                                                  QUERY PLAN

---------------------------------------------------------------------------------------------------------------
 Insert on demo2 (actual time=4.267..4.268 rows=0 loops=1)
   CTE del
     ->  Delete on demo1 a (actual time=0.521..2.219 rows=1000 loops=1)
           ->  Nested Loop (actual time=0.508..1.701 rows=1000 loops=1)
                 ->  HashAggregate (actual time=0.492..0.641 rows=1000 loops=1)
                       Group Key: "ANY_subquery".id
                       Batches: 1  Memory Usage: 193kB
                       ->  Subquery Scan on "ANY_subquery" (actual time=0.069..0.305 rows=1000 loops=1)
                             ->  Limit (actual time=0.065..0.177 rows=1000 loops=1)
                                   ->  Seq Scan on demo1 my_batch (actual time=0.064..0.134 rows=1000 loops=1)
                 ->  Index Scan using demo1_pkey on demo1 a (actual time=0.001..0.001 rows=1 loops=1000)
                       Index Cond: (id = "ANY_subquery".id)
   ->  CTE Scan on del (actual time=0.523..2.526 rows=1000 loops=1)
 Planning Time: 0.231 ms
 Execution Time: 4.362 ms
(15 rows)

Do you think I am happy because it is faster, 4.362 ms vs. 18.702 ms? Not at all. I don't care about those milliseconds. What makes me prefer this plan is its scalability: no operation here depends on the size of the source table. Thanks to my hint, I guarantee O(1) complexity per batch, with no risk of switching to an O(N) plan because of stale statistics.

This example runs on PostgreSQL (with the pg_hint_plan extension) or compatible databases (like YugabyteDB, where pg_hint_plan is installed by default).
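If you want to try this, here is a minimal setup sketch, assuming the pg_hint_plan package is already installed on the server (the parameter names below come from the extension's documentation):

```sql
LOAD 'pg_hint_plan';               -- per-session load, or set shared_preload_libraries
SET pg_hint_plan.enable_hint = on; -- hints are enabled by default once loaded
SET pg_hint_plan.debug_print = on; -- report used and ignored hints
SET client_min_messages = log;     -- display those reports in the session
```

The debug output is handy here: it shows when a hint like NestLoop(my_batch a) is parsed but not used, which is the first clue that the alias is wrong.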
