
Discussion on: What exactly is big O, big Θ, big Ω ?

Jason C. McDonald • Edited

In all three notations, you're measuring the growth of the running time as n increases. The difference between them is how much information you have.

Big-Θ is bounded on both ends, meaning you know that the running time should (generally) be no greater than, say, k₂ * n² + 3n, and no less than k₁ * n² + 3n. In fact, you're measuring your worst-case scenario with some certainty here. We'd say, then, that the algorithm in question is Θ(n²), since we throw away the constants (k₁ and k₂) and lower order terms (3n).
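As a quick illustration (a hypothetical all-pairs scan, chosen for this reply rather than taken from any particular library): an algorithm whose inner work runs exactly n × n times has an operation count squeezed between constant multiples of n² on both sides, which is exactly what Θ(n²) claims.

```python
def count_pair_ops(n):
    """Count the inner-loop operations of a simple all-pairs scan.

    Hypothetical stand-in for any algorithm whose inner work runs
    exactly n * n times, making its operation count Theta(n^2).
    """
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # one unit of work per pair (i, j)
    return ops

# The count grows quadratically: doubling n quadruples the work.
print(count_pair_ops(10))   # 100
print(count_pair_ops(20))   # 400
```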

The trouble is, most algorithms aren't going to run worst-case every time. If we only want to state the upper bound, saying "it's never greater than this, but it could be less", that's what big-O is for: it measures the asymptotic upper bound.

So, in our preceding example, if we know the worst case is Θ(n²), but the best case is something less than that, like Θ(n), suddenly big-Θ is no longer useful for the algorithm as a whole. We can't express bounds like that with it. Thus, we instead say the algorithm is O(n²): the runtime won't be greater than n², but it could be less (n, in this case).

In other words, n² is the upper bound, but the lower bound could be below that. Big-O does not define a lower bound. The purpose of big-O, then, is to admit that we are leaving some information out.

To put that yet another way, big-O is accurate for all cases by sacrificing part of its precision. You're saying "it won't be any worse than this". Unlike with big-Θ, you've eschewed the lower bound.
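For a concrete sketch of that kind of gap, here's insertion sort (a standard textbook example, used here for illustration): its comparison count is only n − 1 on already-sorted input but about n²/2 on reverse-sorted input, so its best case is Θ(n), its worst case is Θ(n²), and O(n²) covers every case.

```python
def insertion_sort(items):
    """Insertion sort, returning (sorted_list, comparison_count)."""
    a = list(items)
    comparisons = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        # shift larger elements right until key's slot is found
        while j >= 0:
            comparisons += 1
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comparisons

# already sorted: one comparison per element after the first (Theta(n))
_, best = insertion_sort(list(range(100)))           # best == 99
# reverse sorted: every prior element gets compared and shifted (Theta(n^2))
_, worst = insertion_sort(list(range(100, 0, -1)))   # worst == 4950
```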

Big-Ω, then, is the exact opposite. You're providing the asymptotic lower bound, but omitting the upper bound. You're saying "it won't be any better than this, but it could be worse." If the example algorithm has a best-case runtime of Ω(n), we could accurately say that the algorithm is Ω(n).

This doesn't mean that Ω(n) is automatically the measurement of the best-case. Unless otherwise stated, we are always describing the worst-case scenario, as that's the most useful thing to measure in algorithmic efficiency.
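One way to see the asymmetry, with a different hypothetical early-exit scan (not an example from above): the check below must always make at least one comparison, so it is Ω(1), but it may make up to n − 1, so it is O(n). Stating only the Ω bound deliberately leaves the upper end open.

```python
def has_adjacent_duplicate(items):
    """Return (found, comparisons): scan neighbouring pairs, stopping early."""
    comparisons = 0
    for i in range(len(items) - 1):
        comparisons += 1
        if items[i] == items[i + 1]:
            return True, comparisons   # early exit: as few as 1 comparison
    return False, comparisons          # full scan: n - 1 comparisons

# best case: the first pair matches, 1 comparison (the Omega(1) floor)
# worst case: no adjacent duplicates, n - 1 comparisons (the O(n) ceiling)
```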

You can probably see, then, why we almost always only use big-O in discussing algorithms: it's the only one of the three to be accurate for all cases and denote the worst-case scenario. Big-Θ is only useful for denoting specific scenarios, or in the rare algorithm where O(f(n)) == Ω(f(n)).

In any case, our goal is ultimately to achieve the most accuracy and precision possible in our description of an algorithm or a scenario. Of course, since we can't accurately describe most algorithms with just big-Θ, we'll usually throw away the lower bound and describe it in terms of big-O instead.


Side note: This means that a lot of people use this notation incorrectly. Ever seen this?

The algorithm has a best-case of O(n), and a worst-case of O(n³).

That's probably not what is meant, because in both cases, only the upper bound of each scenario is being denoted. I'm fairly certain the lower bound of the algorithm is known if the best-case scenario is being described.

If both bounds of a scenario are known, we should be more precise. We should generally use big-Θ instead when we know both:

The algorithm has a worst-case of Θ(n³) and a best-case of Θ(n).

Or, if we only know the upper bound of the worst-case and lower bound of the best-case, we could use big-O and big-Ω. Just be sure to denote when you're not describing worst-case!

The algorithm has a worst-case of O(n³) and a best-case of Ω(n).

That nicely fences in the entire algorithm.
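Here's a sketch of how that fencing looks in practice, using an early-exit bubble sort as a hypothetical algorithm (and n² rather than the n³ of the example above): its comparison count never drops below n − 1 (the Ω(n) fence, hit on sorted input) and never exceeds n(n − 1)/2 (the O(n²) fence, hit on reverse-sorted input).

```python
def bubble_sort_comparisons(items):
    """Early-exit bubble sort; return the number of comparisons made.

    Hypothetical example of an algorithm fenced by O(n^2) above and
    Omega(n) below: a sorted input needs one pass (n - 1 comparisons),
    while a reverse-sorted input needs n * (n - 1) / 2.
    """
    a = list(items)
    comparisons = 0
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            comparisons += 1
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:   # already sorted: stop early
            break
    return comparisons

n = 50
best = bubble_sort_comparisons(range(n))          # n - 1 = 49
worst = bubble_sort_comparisons(range(n, 0, -1))  # n * (n - 1) / 2 = 1225
```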