Rule-based Query Optimization

Vladimir Ozerov
Vladimir Ozerov is a founder of Querify Labs, where he manages the research and development of innovative data management products for technology companies.
Originally published at querifylabs.com ・7 min read

The goal of the query optimizer is to find the query execution plan that computes the requested result efficiently. In this blog post, we discuss rule-based optimization, a common pattern that modern optimizers use to explore equivalent plans. We then analyze rule-based optimization in three state-of-the-art optimizers: Apache Calcite, Presto, and CockroachDB.

Transformations

A query optimizer must explore the space of equivalent execution plans and pick the optimal one. Intuitively, plan B is equivalent to plan A if it produces the same result for all possible inputs.

To generate the equivalent execution plans, we may apply one or more transformations to the original plan. A transformation accepts one plan and produces zero, one, or more equivalent plans. As a query engine developer, you may implement hundreds of different transformations to generate a sufficient number of equivalent plans.

Some transformations operate on bigger parts of the plan or even the whole plan. For example, an implementation of the join order selection with dynamic programming may enumerate all joins in the plan, generate alternative join sequences, and pick the best one.

Dynamic Programming
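To make the idea concrete, here is a minimal sketch of subset-based dynamic programming for join ordering. The cost model is an assumption made for the example, not what any particular engine uses: a join costs the product of its input cardinalities, and its output is that product scaled by a fixed selectivity of 0.1.

```java
import java.util.*;

// Toy dynamic-programming join enumeration: for every subset of relations,
// remember the cheapest way to build it from two smaller subsets.
class JoinDp {
    static double bestCost(double[] cardinalities) {
        int n = cardinalities.length;
        double[] card = new double[1 << n];   // estimated output rows per subset
        double[] cost = new double[1 << n];   // cheapest cost to build the subset
        Arrays.fill(cost, Double.POSITIVE_INFINITY);
        for (int i = 0; i < n; i++) {
            card[1 << i] = cardinalities[i];
            cost[1 << i] = 0;                 // scanning a base relation is free here
        }
        for (int s = 1; s < (1 << n); s++) {
            if (Integer.bitCount(s) < 2) continue;
            // Try every split of the subset into two non-empty halves.
            for (int left = (s - 1) & s; left > 0; left = (left - 1) & s) {
                int right = s & ~left;
                double joinCost = cost[left] + cost[right] + card[left] * card[right];
                if (joinCost < cost[s]) {
                    cost[s] = joinCost;
                    // Assumed fixed selectivity of 0.1 for the example.
                    card[s] = card[left] * card[right] * 0.1;
                }
            }
        }
        return cost[(1 << n) - 1];
    }
}
```

For three relations with cardinalities 100, 10, and 1, the cheapest plan joins the two small relations first, for a total cost of 110 under this toy model.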

Other transformations could be relatively isolated. Consider the transformation that pushes the filter operator past the aggregate operator. It works on an isolated part of the tree and doesn't require a global context.

Filter Pushdown
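A toy sketch of this transformation, assuming a deliberately simplified plan representation (the node names and the columns field are illustrative, not any engine's API): the filter may move below the aggregate only when its predicate references grouping columns exclusively.

```java
import java.util.*;

// Toy plan nodes, just enough to express Filter(Aggregate(Scan)).
class PlanNode {
    final String op;                 // "Scan", "Filter", "Aggregate"
    final PlanNode input;            // null for Scan
    final Set<String> columns;       // filter references or group-by keys
    PlanNode(String op, PlanNode input, Set<String> columns) {
        this.op = op; this.input = input; this.columns = columns;
    }
    public String toString() {
        return op + (input == null ? "" : "(" + input + ")");
    }
}

class FilterPushdown {
    // Returns the rewritten plan, or the original plan when the rule
    // does not apply.
    static PlanNode apply(PlanNode node) {
        if (!node.op.equals("Filter")) return node;
        PlanNode agg = node.input;
        if (agg == null || !agg.op.equals("Aggregate")) return node;
        // Safe only if the predicate touches grouping columns exclusively;
        // a real engine checks this on the filter's expression tree.
        if (!agg.columns.containsAll(node.columns)) return node;
        PlanNode pushedFilter = new PlanNode("Filter", agg.input, node.columns);
        return new PlanNode("Aggregate", pushedFilter, agg.columns);
    }
}
```

A filter on the grouping column dept is pushed below the aggregate, while a filter on an aggregated column such as salary is left where it is.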

Rules

Every optimizer follows some algorithm that decides when to apply particular transformations and how to process the newly created equivalent plans. As the number of transformations grows, it becomes inconvenient to keep them in a monolithic routine. Imagine a large if-else block of code that decides how to apply a hundred transformations to several dozen relational operators.

To facilitate your engine's evolution, you may want to abstract your transformations behind a common interface. For every transformation, you define a pattern that determines whether the transformation applies to a given part of the plan. A pair of a pattern and a transformation is called a rule.

The rule abstraction allows you to split the optimization logic into pluggable parts that evolve independently of each other, significantly simplifying the development of the optimizer. The optimizer that uses rules to generate the equivalent plans is called a rule-based optimizer.

Rule-based Optimization
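The abstraction can be sketched in a few lines. The Rule interface, driver, and sample rule below are illustrative names over a toy string-based "plan", not Calcite's or Presto's actual API:

```java
import java.util.*;

// A rule couples a pattern check ("does this transformation apply here?")
// with the transformation itself ("emit equivalent plans").
interface Rule<P> {
    boolean matches(P node);      // the pattern
    List<P> transform(P node);    // zero or more equivalent plans
}

class RuleDriver {
    // The driver stays generic: it never needs to know what a rule does,
    // which is what lets rules evolve independently of each other.
    static <P> List<P> fireAll(List<? extends Rule<P>> rules, P node) {
        List<P> equivalents = new ArrayList<>();
        for (Rule<P> rule : rules) {
            if (rule.matches(node)) {
                equivalents.addAll(rule.transform(node));
            }
        }
        return equivalents;
    }
}

// A sample rule: commute the two sides of a join.
class CommuteRule implements Rule<String> {
    public boolean matches(String node) { return node.contains(" JOIN "); }
    public List<String> transform(String node) {
        String[] parts = node.split(" JOIN ");
        return List.of(parts[1] + " JOIN " + parts[0]);
    }
}
```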

Notice that the rules are, first of all, a pattern that helps you decompose the optimizer's codebase. The usage of rules doesn't force you to follow a specific optimization procedure, such as Volcano/Cascades. It doesn't prevent you from using particular optimization techniques, like dynamic programming for join enumeration. It doesn't require you to choose between heuristic or cost-based approaches. However, the isolated nature of rules may complicate some parts of your engine, such as join planning.

Examples

Now that we understand the idea behind rule-based optimization, let's look at several real-world examples: Apache Calcite, Presto, and CockroachDB.

Apache Calcite

Apache Calcite is a dynamic data management framework. At its core, Apache Calcite has two rule-based optimizers and a library of transformation rules.

The HepPlanner is a heuristic optimizer that applies rules one by one until no more transformations are possible.
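The apply-until-fixpoint loop can be sketched as follows. This is a simplified illustration of the idea, not HepPlanner's actual implementation; each "rule" here is just a function that returns the rewritten plan, or its input unchanged when it does not match:

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Heuristic optimization in the HepPlanner style (a simplified sketch):
// keep applying rewrites until a full pass changes nothing.
class HeuristicDriver {
    static <P> P optimize(P plan, List<UnaryOperator<P>> rules, int maxPasses) {
        for (int pass = 0; pass < maxPasses; pass++) {
            P before = plan;
            for (UnaryOperator<P> rule : rules) {
                plan = rule.apply(plan);
            }
            if (plan.equals(before)) return plan;  // fixpoint reached
        }
        return plan;  // bail out to guarantee termination
    }
}
```

With two toy simplification rules over expression strings, "x + 0 * 1" is rewritten to "x" across two passes: the second pass changes nothing, so the loop stops.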

The VolcanoPlanner is a cost-based optimizer that generates multiple equivalent plans, puts them into the MEMO data structure, and uses costs to choose the best one. The VolcanoPlanner may fire rules in an arbitrary order or work in a recently introduced Cascades-like top-down style.

The rule interface accepts the pattern and requires you to implement the onMatch(context) method. This method doesn't return the new relational tree, as one might expect. Instead, it returns void and registers new transformations through the context, which allows a single rule invocation to emit multiple equivalent trees. Apache Calcite comes with an extensive library of built-in rules and allows you to add your own.

class CustomRule extends RelOptRule {
    CustomRule() {
        super(pattern_for_the_rule);
    }

    @Override
    public void onMatch(RelOptRuleCall call) {
        RelNode equivalentNode = ...;

        // Register the new equivalent node in MEMO
        call.transformTo(equivalentNode);
    }
}

In Apache Calcite, you may define one or more optimization stages. Every stage may use its own set of rules and its own optimizer. Many products based on Apache Calcite use multiple stages to minimize the optimization time, at the cost of possibly producing a suboptimal plan. See our previous blog post for more details on how to create a query optimizer with Apache Calcite.

Let's take a look at a couple of rules for join planning. To explore all bushy join trees, you may use JoinCommuteRule and JoinAssociateRule. These rules are relatively simple and work on individual joins. The problem is that they may trigger duplicate derivations, as explained in this paper.

Join Commute and Associate Rules
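The following sketch shows how exhaustive application of commute and associate steps explores the space of join trees for three relations. The tree encoding is invented for the example, and the seen map absorbs exactly the duplicate derivations that the paper mentioned above is concerned with:

```java
import java.util.*;

// Closure of a toy join tree under two local rules, in the spirit of
// JoinCommuteRule and JoinAssociateRule:
//   commute:      (A J B)       -> (B J A)
//   associate:    ((A J B) J C) -> (A J (B J C))
class Join {
    final String table;            // non-null for leaves
    final Join left, right;        // non-null for inner join nodes
    Join(String t) { table = t; left = null; right = null; }
    Join(Join l, Join r) { table = null; left = l; right = r; }
    public String toString() {
        return table != null ? table : "(" + left + " J " + right + ")";
    }

    static Set<String> closure(Join seed) {
        Map<String, Join> seen = new LinkedHashMap<>();
        Deque<Join> queue = new ArrayDeque<>(List.of(seed));
        while (!queue.isEmpty()) {
            Join plan = queue.poll();
            if (seen.putIfAbsent(plan.toString(), plan) != null) continue;
            queue.addAll(rewrites(plan));      // duplicates are filtered above
        }
        return seen.keySet();
    }

    // Apply commute and associate at every node of the tree.
    static List<Join> rewrites(Join node) {
        List<Join> out = new ArrayList<>();
        if (node.table != null) return out;
        out.add(new Join(node.right, node.left));                     // commute
        if (node.left.table == null) {                                // associate
            out.add(new Join(node.left.left, new Join(node.left.right, node.right)));
        }
        for (Join l : rewrites(node.left)) out.add(new Join(l, node.right));
        for (Join r : rewrites(node.right)) out.add(new Join(node.left, r));
        return out;
    }
}
```

Starting from ((A J B) J C), the closure reaches all 12 distinct trees over three relations (6 left-deep and 6 right-deep), but only after filtering many duplicate derivations of the same tree.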

Alternatively, Apache Calcite may use a set of rules that convert multiple joins into a single n-way join and then apply a heuristic algorithm to produce a single optimized join order from the n-way join. This is an example of a rule that works on a large part of the tree rather than on individual operators. You may use a similar approach to implement join planning with dynamic programming as a rule.

Multi-join Rule

The Apache Calcite example demonstrates that the rule-based optimization could be used with both heuristic and cost-based exploration strategies, as well as for complex join planning.

Presto

Presto is a distributed query engine for big data. Like Apache Calcite, it uses rules to perform transformations. However, Presto doesn't have a cost-based search algorithm and relies only on heuristics when transitioning between optimization steps. See our previous blog for more details on Presto query optimizer.

Since Presto cannot explore multiple equivalent plans at once, it has a simpler rule interface that produces at most one new equivalent tree.

interface Rule<T> {
    Pattern<T> getPattern();
    Result apply(T node, ...);
}

Presto also has several rules that use costs internally to explore multiple alternatives within the scope of a single rule call. An example is the (relatively) recently introduced ReorderJoins rule. Similar to the above-mentioned Apache Calcite n-way join rules, the ReorderJoins rule first converts a sequence of joins into a single n-way join. Then the rule enumerates equivalent join orders and picks the one with the least cost (unlike Apache Calcite's LoptOptimizeJoinRule, which uses heuristics).

The ReorderJoins rule is of particular interest because it demonstrates how we may use rule-based optimization to combine heuristic and cost-based search strategies in the same optimizer.

CockroachDB

CockroachDB is a cloud-native SQL database for modern cloud applications. It has a rule-based Cascades-style query optimizer.

Unlike Apache Calcite and Presto, CockroachDB doesn't have a common rule interface. Instead, it uses a custom DSL to define each rule's pattern and transformation logic. A code generator analyzes the DSL files and produces a monolithic optimization routine. Code generation may yield faster optimizer code because it avoids virtual calls when invoking rules.

Below is an example of a rule definition that attempts to generate a streaming aggregate. Notice that you do not need to write the whole rule logic in the DSL. Instead, you may reference utility methods written in Go (CockroachDB's primary language) from within the rule to minimize the amount of DSL-specific code.

[GenerateStreamingGroupBy, Explore]
(GroupBy | DistinctOn | EnsureDistinctOn | UpsertDistinctOn
        | EnsureUpsertDistinctOn
    $input:*
    $aggs:*
    $private:* & (IsCanonicalGroupBy $private)
)
=>
(GenerateStreamingGroupBy (OpName) $input $aggs $private)

There are two rule types in CockroachDB. Normalization rules convert relational operators into canonical forms before they are inserted into the MEMO, simplifying subsequent optimization. An example is the NormalizeNestedAnds rule, which normalizes AND expressions into a left-deep tree. Normalization is performed via sequential invocation of the normalization rules. The second category is exploration rules, which generate multiple equivalent plans. Exploration rules are invoked using a cost-based, Cascades-like top-down optimization strategy with memoization.
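A toy sketch of such a normalization, assuming the left-deep canonical form described above (the Expr class is an illustration, not CockroachDB's expression representation): collect every conjunct of an AND tree, then rebuild the conjunction growing to the left, so equal predicates always take one canonical shape.

```java
import java.util.*;

// Toy boolean expression tree: a leaf atom or an AND of two children.
class Expr {
    final String atom;          // non-null for leaves (e.g. "a", "b")
    final Expr left, right;     // non-null for AND nodes
    Expr(String atom) { this.atom = atom; left = null; right = null; }
    Expr(Expr l, Expr r) { atom = null; left = l; right = r; }
    public String toString() {
        return atom != null ? atom : "(" + left + " AND " + right + ")";
    }

    // Normalize an AND tree into a left-deep tree of the same conjuncts.
    static Expr normalize(Expr e) {
        List<Expr> conjuncts = new ArrayList<>();
        collect(e, conjuncts);
        Expr result = conjuncts.get(0);
        for (int i = 1; i < conjuncts.size(); i++) {
            result = new Expr(result, conjuncts.get(i));  // grow to the left
        }
        return result;
    }

    private static void collect(Expr e, List<Expr> out) {
        if (e.atom != null) { out.add(e); return; }
        collect(e.left, out);
        collect(e.right, out);
    }
}
```

For example, the right-deep tree (a AND (b AND (c AND d))) normalizes to (((a AND b) AND c) AND d).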

CockroachDB has a ReorderJoins rule to do the join planning. The rule uses a variation of the dynamic programming algorithm described in this paper to enumerate the valid join orders and add them to MEMO.

Thus, CockroachDB uses rule-based optimization for heuristic normalization, cost-based exploration, and join planning with dynamic programming.

Summary

Rule-based query optimization is a very flexible pattern that you may use when designing a query optimizer. It allows you to split the complicated transformation logic into self-contained parts, reducing the optimizer's complexity.

The rule-based optimization doesn't limit you in how exactly to optimize your plans, be it bottom-up dynamic programming or top-down Cascades-style exploration, cost-based or heuristic optimization, or anything else.

In future posts, we will discuss the difference between logical and physical optimization. Stay tuned!
