Ali Sherief

Algorithmic Complexity Basics


It's not just the running time; it's the space usage too.

Algorithms show up in pretty much every program that's larger than a college project, and they often power essential functionality inside those programs. That's why it is important to make sure they run efficiently. Even in situations where execution speed doesn't matter much, it feels good to know that you are not coding a highly inefficient implementation.

Before we begin, it is essential to note that an algorithm's memory usage is just as crucial as its runtime. I think memory inefficiency bites new programmers more often than slowness does, because the consequences of exhausting system memory, or using so much that there isn't a reasonable amount left for other running processes, can be fatal. By contrast, the only setback a slow program causes is time wasted waiting.

Complexity and Big-O notation

Runtime and memory usage are measured using a metric called complexity, written in Big-O notation: the letter "O" followed by an expression in parentheses. There are several related notations (more on those below), but Big-O is by far the most commonly used. Inside the parentheses is a variable such as n, representing the size of the input to the algorithm. It is also possible to use more than one variable to account for other factors, such as a second input.

Specifically, f(n) = O(g(n)) means that as the input size n approaches infinity, the runtime or memory usage f(n) grows no faster than g(n), up to a constant factor. The exact words matter here: Big-O is an upper bound, so the actual time or space may grow as fast as the expression or slower. Little-o, written o(g(n)), means the time or space grows strictly slower than the expression, while Big-Theta, written Θ(g(n)), means it grows at exactly the same rate.
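If you like precise definitions, here is the standard textbook way to write those three down (f is the measured cost and g is the complexity expression):

```latex
% Big-O: an upper bound, up to a constant factor
f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0

% little-o: strictly slower growth
f(n) = o(g(n)) \iff \lim_{n \to \infty} \frac{f(n)}{g(n)} = 0

% Big-Theta: the same growth rate, bounded in both directions
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ g(n) = O(f(n))
```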

You might also see people use Big-O exclusively but give three different expressions, one each for the best-case, average-case, and worst-case complexity. I think that's an easier way to understand the concept.

For all of the above notations, the measure remains valid if the complexity expression is multiplied by a constant. In other words, O-notation doesn't care whether the complexity is n or 30n, because both grow at the same rate.
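As a quick worked example of why constants drop out, take a made-up cost function f(n) = 30n + 50 and plug it into the Big-O definition above:

```latex
30n + 50 \le 30n + 50n = 80n \quad \text{for all}\ n \ge 1
% so choosing c = 80 and n_0 = 1 gives 30n + 50 = O(n)
```

A loop that does 30 units of work per element is still linear; only the growth rate survives.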

This brings us to the complexity expressions themselves.

Expressions

Here are the complexities you will see most often with Big-O notation, with an example algorithm for each (a few runnable Python sketches follow the list):

  • Constant: O(1) - Means the time or space is constant regardless of the input size.

Example: Array access

  • Logarithmic: O(log(n)) - The time/space is logarithmic with respect to the input size.

Example: Binary search

  • Linear: O(n) - The time/space is linear in the input size.

Example: Iteration and Linear search

  • Linearithmic or Superlinear: O(n log(n)) - The time/space grows as n multiplied by log(n).

Example: Quicksort, Mergesort, Heapsort

  • Quadratic: O(n**2) - The time/space is quadratic in the input size, i.e., for input size n, the runtime or memory usage grows in proportion to n squared.

Example: Insertion sort

  • Cubic: O(n**3) - The time/space is cubic in the input size.

Example: Naive matrix multiplication (Strassen's algorithm actually improves on this, running in roughly O(n**2.81))

  • Exponential: O(2**n) - The time/space is exponential in the input size. Note that unlike with logarithms, the base matters here: O(3**n) grows strictly faster than O(2**n).

Example: Exact solutions to the Traveling Salesman Problem
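To make a few of these concrete, here are minimal Python sketches of some examples from the list. The function names and test data are my own choices for illustration:

```python
def array_access(arr, i):
    """O(1): indexing takes the same time regardless of len(arr)."""
    return arr[i]

def binary_search(arr, target):
    """O(log n): each comparison halves the remaining search range.
    Assumes arr is sorted."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

def linear_search(arr, target):
    """O(n): may have to look at every element once."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

def insertion_sort(arr):
    """O(n**2): in the worst case, each of the n elements is
    compared against all the elements before it."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]  # shift larger elements right
            j -= 1
        arr[j + 1] = key
    return arr

if __name__ == "__main__":
    data = [5, 2, 9, 1, 7]
    print(array_access(data, 2))   # 9
    print(linear_search(data, 7))  # 4
    print(insertion_sort(data))    # [1, 2, 5, 7, 9]
    print(binary_search(data, 7))  # 3 (data is now sorted)
```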

Clearly, of the complexities listed above, exponential is the worst runtime or space usage, while constant is the best. Everyone wants to use algorithms with constant complexity, but in most cases that is not possible, because for most problems no constant-time algorithm is known.

Generally speaking, each problem has a minimum complexity, corresponding to the most efficient algorithm known for solving it.


So, that’s all about algorithmic complexity. Thanks for reading.
