Mayur Patil

All About Big O

Big O notation is the most widely used metric for describing the efficiency of algorithms. It gives us an idea of the worst-case time or space an algorithm requires to run.

An Analogy to Understand Big O notation

Let's look at an everyday problem where we unknowingly use similar tactics to pick the best solution.
Suppose a person wants to buy a book. He has two options: order it online, or visit a nearby shop and buy it there.
He checks and finds that an online order will take 2 - 4 days to be delivered, while visiting the nearby shop will get him the book in just one day.
If all he cares about is time, the best solution is to buy it from the store.
If we use Big O to denote the time taken:

  1. Buying online: O(4), i.e. in the worst case it will take him 4 days to get the book.
  2. Buying from the shop: O(1), i.e. it will take him one day if he visits the book store.

Big O has other siblings too; together they describe how efficient an algorithm is in terms of the time and space it requires to run.

Big-Omega (Ω): Big Omega notation denotes the best-case scenario of the time or space required for an algorithm to run. Although it is not as widely used as Big O, because in the end what matters is how the algorithm performs in the worst case, it is still an important metric.
For ordering the book online we had an expected delivery of 2 - 4 days, so in the best case it could take just 2 days; we can say that solution takes Ω(2) time.

Big-Theta (θ): Just as Big Omega gives a lower bound on the measurement (the best case) and Big O gives an upper bound (the worst case), Big Theta gives a tight bound on that measurement. An algorithm is θ(N) if it is both O(N) and Ω(N). Buying the book from the store always takes 1 day, i.e. both O(1) and Ω(1) time, so we can say it takes θ(1) time. In an academic setting these three metrics are used distinctly; in an industrial setting, however, Big O and Big Theta are generally treated as the same.
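
To make the tight bound concrete, here's a minimal Java sketch (the method name is mine, for illustration): summing an array always visits every element, so it is Ω(N) in the best case and O(N) in the worst case, i.e. θ(N).

// Always performs exactly N iterations, no matter the input values,
// so the runtime is both O(N) and Ω(N), i.e. θ(N).
int sumAll(int[] array) {
  int total = 0;
  for (int x : array) {
    total += x;
  }
  return total;
}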

The study of these measures for performance analysis is called asymptotic analysis of the algorithm, and the notations are called asymptotic notations or Bachmann-Landau notations.

Typically, asymptotic analysis measures either the time an algorithm takes to run or the space it requires to do so; these are referred to as time complexity analysis and space complexity analysis respectively.

Examples

Let's look at some basic examples of how to calculate the Big O of algorithms.

  • The following code finds the smallest value in the array
int min = Integer.MAX_VALUE;

for (int x : array) {
   if (x < min) {
     min = x;
   }
}

The algorithm iterates once for every element in the array.
Time Complexity: O(N), where N = the number of elements in the array
Space Complexity: O(1), because the algorithm only keeps track of a single value (the current minimum), no matter how many elements the array has.

NOTE: Big O only has to describe the rate of increase, i.e. how the algorithm behaves as its data grows. In the example above, if the array size were increased, the time required would increase as well, whereas the space required would stay the same.

  • The following example calculates the sum of the integers from 1 to N
int sum(int n) {
  if (n <= 0) {
    return 0;
  }
  return n + sum(n - 1);
}

To find the runtime complexity of a recursive algorithm, visualize the call stack as a tree: N at the root, with a child N - 1, which has a child N - 2, and so on down to 1.
Therefore the above algorithm runs N times.
Time Complexity: O(N), where N is the number passed in
Space Complexity: O(N), because the call stack grows by one frame for every recursive call until the base case is reached, so the space requirement increases linearly with N
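
For contrast, here's a hedged sketch of an iterative version of the same sum (again, the naming is mine): it still does O(N) work, but since it builds no call stack it only needs O(1) space.

// Iterative version of sum: O(N) time but O(1) space,
// because no recursive call stack is built up.
int sumIterative(int n) {
  int total = 0;
  for (int i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}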

Because Big O only has to describe how an algorithm's cost grows as its input grows, we can simplify the complexity expression by dropping certain items, namely constants and non-dominant terms.

Dropping Constants

Consider the following example, where every iteration does two pieces of work: it finds both the minimum and the maximum element of an array.

int min = Integer.MAX_VALUE;
int max = Integer.MIN_VALUE;
for (int x : array) {
    if (x < min) min = x;
    if (x > max) max = x;
}

Technically, calling this algorithm O(2N) wouldn't be wrong. However, because we only want Big O to describe the rate of increase, we drop the 2 and the complexity becomes O(N). Even if there were 1000 lines of code per iteration, we would treat them as one unit of work, as long as the work per iteration is constant. The same goes for runtimes like O(N/2): we treat them as O(N).
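
As a quick illustration of the O(N/2) case (a hypothetical helper of my own, not from the original post): the loop below visits only every other element, performing N/2 iterations, yet its runtime still grows linearly with N, so it is O(N).

// Visits every other element: N/2 iterations in total,
// still O(N) because the runtime grows linearly with N.
boolean containsAtEvenIndex(int[] array, int target) {
  for (int i = 0; i < array.length; i += 2) {
    if (array[i] == target) return true;
  }
  return false;
}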

Dropping Non-Dominant Terms

What do we do about an expression such as O(N² + N)? That second N isn't exactly a constant, but it's not especially important either, given that we have a term which dominates for almost all possible values of N, i.e. N². For N = 12, for example, the expression evaluates to 144 + 12, and the 144 clearly dominates. So we drop the non-dominant term and write O(N²).
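
To see where such an expression might come from, here's a rough sketch (the loop structure is my own, for illustration): a nested loop contributes the N² term and a separate single loop contributes the N term.

// Nested loop: O(N^2) work.
for (int i = 0; i < array.length; i++) {
  for (int j = 0; j < array.length; j++) {
    System.out.println(array[i] + ", " + array[j]);
  }
}

// Separate single loop: O(N) additional work.
for (int x : array) {
  System.out.println(x);
}
// Total: O(N^2 + N), which simplifies to O(N^2).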

Algorithms with Multiple Variables

In various problems, an algorithm's runtime can depend on more than one variable. Consider the following code with two arrays; it simply prints each array's elements.

for (int a : arrayA) {
  print(a);
}

for (int b : arrayB) {
  print(b);
}

In the above example we have two arrays which won't necessarily have the same length. If we calculate the runtime of each loop individually, both are O(N), with N being the respective array's length. However, we almost always want the runtime of the entire algorithm, not of its individual components. In that case we say the algorithm has an O(A + B) runtime, with A and B denoting the arrays' respective lengths.
There are also scenarios where it's ambiguous whether to add or multiply the runtimes to find the total work done. Consider the following example, where for the same two arrays we print every pair of elements:

for (int a : arrayA) {
  for (int b : arrayB) {
    print(a + ", " + b);
  }
}

In this case, for every element of arrayA we iterate through the whole of arrayB, i.e. we do O(B) work for each of arrayA's elements. In such scenarios we multiply the runtimes, so the total work done is O(A * B).

In other words:

  • If your algorithm is in the form "do this, then, when you're all done, do that" then you add the runtimes.
  • If your algorithm is in the form "do this for each time you do that" then you multiply the runtimes.
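
Putting both rules together in one hedged sketch (the structure is mine, reusing the arrays from above): first we "do this" on its own, then we "do this for each time we do that".

// "Do this": a standalone loop, O(A) work.
for (int a : arrayA) {
  print(a);
}

// "Do this for each time you do that": nested loops, O(A * B) work.
for (int a : arrayA) {
  for (int b : arrayB) {
    print(a + ", " + b);
  }
}
// Total: O(A + A * B), which simplifies to O(A * B).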

There's one more aspect of analysing Big O complexities that tends to confuse people:

Amortized Time

Amortizing means averaging a cost out over the long run. Certain algorithms have a primary runtime for their usual work, but every so often, at predictable intervals, they do extra work with its own, secondary runtime. Depending on how often that extra work happens, we can amortize the secondary runtime over the primary one.

Consider the example of the ArrayList data structure: an automatically resizing array for which we do not have to specify a size up front; it can accommodate any number of elements, unlike normal arrays where we must specify a capacity and cannot add elements once that capacity is met. Internally, an ArrayList starts with an initial capacity. If we try to add an element beyond that capacity, the ArrayList accommodates it by creating a new array of double the current size, copying the old array into the new one, and then adding the new element.

E.g. if the current capacity of the ArrayList is 100 and we have already added 100 elements, then upon adding the 101st element the ArrayList creates another array of size 100 * 2 = 200, copies the 100 existing elements over, and adds the 101st. It keeps doubling its size to 400, 800, 1600, etc. as each capacity is reached.

Creating the new array and copying the elements is O(N) work, but it occurs at ever-longer intervals; every other insertion is just O(1) work. In fact, inserting N elements triggers copies of roughly 1 + 2 + 4 + ... + N/2 ≈ N elements in total across all the resizes, so the copying averages out to O(1) extra work per insertion.

Hence we say such an algorithm has an amortized runtime of O(1).
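
Here's a minimal Java sketch of the doubling strategy, a simplified stand-in for the real ArrayList internals (the class name and the initial capacity of 8 are assumptions for illustration):

// A simplified dynamic array demonstrating amortized O(1) insertion.
class SimpleArrayList {
  private int[] data = new int[8];  // assumed initial capacity
  private int size = 0;

  void add(int value) {
    if (size == data.length) {
      // Occasional O(N) work: double the capacity and copy everything over.
      int[] bigger = new int[data.length * 2];
      System.arraycopy(data, 0, bigger, 0, size);
      data = bigger;
    }
    // Usual O(1) work: write into the next free slot.
    data[size++] = value;
  }
}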
