Pelumi Adebayo
What The Fork Is Big O

I'm sure you have seen me use the term 'BIG O' frequently in my articles. I figured some of my readers might be wondering what this term means. So yeah...


What About It?

Just as we can do things differently to achieve the same goal, different programs can be written to solve the same problem. But as a software engineer, you need to implement or choose the best solution, hence the need for a way to measure how efficient a solution to a problem is.

What Makes A Better Solution?

In software programming, we judge solutions by the resources they use, and in general we are only concerned with two types of resources: the time they take to run and the amount of space they take up in memory. These are referred to as Time complexity and Space complexity respectively. The best practice is to build the solution that takes the least amount of time to execute and the least amount of space in memory.

Now you might wonder: if we are only concerned about the time and space they take, why not use a stopwatch to measure the time and count the space consumed in bytes?


Here Comes The Problem:

Differences in computer hardware and environment affect the execution time of a program. We can expect a modern laptop to run programs much faster than a computer from the 80s. Even if we run the same program multiple times on the same computer, there will still be some variance in the amount of time it takes to run to completion. This variation is largely caused by the background services that a computer runs continuously; these services can interfere with a program's execution, making it really hard to pin down a consistent, exact amount of time that a program takes to run.

We definitely don't want our conclusions about execution time to be biased by the computer used. Hence the need for a more standardized way to represent the efficiency of a program.

The Solution: BIG O?

Big O Notation is a way to mathematically represent the time and space complexity of a program. We use Big O as a concept to judge which solution is better than another with respect to the resources it uses, without tying the comparison to any particular machine or concrete units.

This means that, with big O, we won't be measuring execution time in milliseconds or seconds, or space in bytes, kilobytes, or megabytes. Instead, we use big O to analyze time and space complexity in terms of how they scale with the size of the input. That is: how well a program performs as its input grows.

It is also worth noting that with Big O, we usually care about the worst-case scenario, i.e., the worst-case performance of the program. The worst-case time complexity indicates the longest running time an algorithm can take for any input of a given size. Also, note that the input size is denoted by 'n', which represents the number of elements in the input data.

The different mathematical notations of time and space complexity, referred to as big O notations, are as follows:

  • O(1)
  • O(n)
  • O(log n)
  • O(n log n)
  • O(n^2)
  • O(2^n)
  • O(n!)

Here is a brief explanation of each:

O(1), pronounced 'O of 1', denotes 'Constant Time' complexity. This literally means that a program will run in constant (the same) time regardless of any increase in the input size. For example, a computer takes the same amount of time to run the program for an input size of 2 as it does for an input size of 1000.
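As a minimal sketch in Python (the function name here is just for illustration), looking up an item by its index is a classic constant-time operation; it takes the same time whether the list holds 10 items or 10 million:

```python
def get_first_item(items):
    # Indexing into a list takes the same amount of time
    # regardless of how many elements the list holds: O(1).
    return items[0]
```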

O(n), 'O of n', is 'Linear Time' complexity. This means that the time it takes for a program to run grows in direct proportion to the input size, i.e., the algorithm performs on the order of 'n' operations.
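For instance, finding the largest number in a list means touching every element once, so the work grows directly with 'n'. A small illustrative Python sketch:

```python
def find_max(numbers):
    # Visits every element exactly once, so the running time
    # grows in direct proportion to the input size: O(n).
    largest = numbers[0]
    for value in numbers:
        if value > largest:
            largest = value
    return largest
```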

O(log n) is 'Logarithmic Time' complexity. This should remind you of your log table, right? O(log n) basically means that time goes up linearly as the input size goes up exponentially. So if it takes 1 second to compute an algorithm on 10 elements, it will take 2 seconds for 100 elements, 3 seconds for 1000 elements, and so on, just like log 10 = 1 and log 100 = 2. This usually happens when we repeatedly divide the input data in half to compute the answer.
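Binary search is the textbook example of repeatedly halving the input. Here's a minimal Python sketch (assuming the input list is already sorted):

```python
def binary_search(sorted_items, target):
    # Each comparison discards half of the remaining range,
    # so the number of steps grows logarithmically: O(log n).
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not found
```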

O(n log n), the 'Linear-logarithmic Time' complexity, is the combination of linear and logarithmic time complexity. Algorithms that repeatedly divide a set of data in half, and then do O(n) work to process or combine those halves at each level, have an overall time complexity of O(n log n).
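Merge sort is a classic example: it splits the list in half (about log n levels of splitting) and does O(n) merging work at each level. A rough Python sketch:

```python
def merge_sort(items):
    # Halving the list gives about log n levels of recursion,
    # and merging at each level touches all n elements,
    # for O(n log n) overall.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge the two sorted halves back together.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```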

O(n^2), pronounced 'O of n squared', is 'Quadratic Time' complexity. This means that the running time grows in proportion to the square of the input size: doubling 'n' roughly quadruples the work. This typically happens when a loop is nested inside another loop over the same input.
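A simple illustration in Python (a naive sketch, not the most efficient way to do this): checking a list for duplicates by comparing every element against every other element performs about n × n comparisons in the worst case:

```python
def has_duplicates(items):
    # Compares every element with every other element:
    # roughly n * n comparisons in the worst case, so O(n^2).
    for i in range(len(items)):
        for j in range(len(items)):
            if i != j and items[i] == items[j]:
                return True
    return False
```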

O(2^n), 'Exponential Time' complexity, denotes an algorithm whose running time doubles with each addition to the input size, i.e., each time the input size (n) increases by one, the time the program takes to run doubles.
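The naive recursive Fibonacci function is a commonly cited example of (roughly) exponential growth, since each call spawns two more calls. A Python sketch:

```python
def fibonacci(n):
    # Each call branches into two further calls, so the total
    # number of calls roughly doubles as n grows by one: O(2^n).
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)
```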

O(n!), the 'Factorial Time' complexity, is what I like to call the 'Oh No!' complexity. This is about the worst time complexity an algorithm can have. Recall that a factorial is the product of the sequence of the first n integers, right? For example, the factorial of 5, or 5!, is: 5 x 4 x 3 x 2 x 1 = 120. This means that the work done by an algorithm that runs in factorial time is multiplied by an ever-increasing factor each time 'n' grows by one.
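Generating every possible ordering (permutation) of a list is the classic factorial-time task, since n items can be arranged in n! different ways. A small Python sketch:

```python
def permutations(items):
    # There are n! possible orderings of n items, so simply
    # listing them all takes O(n!) time. This blows up fast:
    # 5 items give 120 orderings, but 10 items give 3,628,800.
    if len(items) <= 1:
        return [items]
    result = []
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([items[i]] + perm)
    return result
```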

Depending on which of the above categories your program's (algorithm's) time complexity falls into, you can immediately tell roughly how fast or slow it is going to run as the input grows. Hopefully, we'll get to look at some algorithms and analyze their time and space complexity in subsequent articles.

With that, I hope you enjoyed this article and learned a few things about big O. If you did, kindly like, share, and subscribe here. If you'd like to work through some more examples, I suggest you look at the developerinsider article on big O with examples.

Happy Learning!
